Nvidia is one of the leading companies in the world of artificial intelligence, but is it at risk of failing? Nvidia’s technological advantages and Jensen Huang’s leadership have created what appears to be an unassailable market position, but Stephen Witt’s The Thinking Machine identifies several significant vulnerabilities that could threaten the company’s dominance.
These challenges range from geopolitical risks and manufacturing dependencies to Huang’s refusal to engage with AI safety concerns, plus practical questions about energy consumption and corporate succession planning. If Nvidia doesn’t overcome these obstacles, the company may fall hard. Keep reading to learn more about the three major challenges that Nvidia currently faces.
Challenge #1: Nvidia’s Dangerous Dependence on Taiwan
The first vulnerability Witt identifies as a reason why Nvidia could crash is its dependence on Taiwan Semiconductor Manufacturing Company (TSMC) to produce its most advanced chips. TSMC has technical expertise and manufacturing precision that would take competitors years or decades to replicate.
(Shortform note: TSMC was founded in 1987 through a Taiwanese government initiative at the perfect moment to capitalize on the global shift toward outsourced chip manufacturing. While Intel controlled 65% of advanced chip production at the time, Intel’s focus on designing and manufacturing its own chips left an opening for a company willing to manufacture chips designed by others. TSMC filled this “foundry” niche, developing manufacturing processes so precise that they now produce chips with features measured in nanometers. Experts say that for other countries to replicate TSMC’s capabilities, they’d need not just massive investment in manufacturing technology but also ecosystems of suppliers and engineering talent.)
TSMC’s primary operations are in Taiwan, a self-governing island that China claims as its territory and has threatened to take by force. Witt explains that China’s increasingly aggressive stance toward Taiwan creates a direct threat to Nvidia’s business model and the broader AI industry that depends on these chips. Any military conflict or economic disruption in Taiwan could immediately halt production of the processors that power the global AI revolution.
Despite his Taiwanese heritage and cultural connections that helped build Nvidia’s relationship with TSMC, Huang publicly downplays these risks. But Witt suggests that this dependence represents one of the most significant long-term challenges facing both Nvidia and the AI ecosystem as a whole. He also notes that it has led to discussion of a “silicon shield”: the idea that the world’s reliance on Taiwanese semiconductor manufacturing might deter Chinese aggression by making the costs of conflict too high for China to bear.
| Testing the Silicon Shield: Why Economic Deterrence May Not Work

The “silicon shield” theory assumes that rational calculation will prevent conflict, but some of China’s actions suggest that strategic and ideological goals may outweigh purely economic considerations. Since Witt’s book was completed, China has escalated military preparations despite the enormous economic risks—analysis shows a Taiwan conflict would shrink China’s GDP by nearly 9% and cost the global economy $10 trillion. Yet US officials report that Chinese military exercises around Taiwan serve as “rehearsals” for invasion, with China targeting 2027 as the year it could take Taiwan by force.

More tellingly, China has multiple strategies that could bypass the silicon shield’s protection entirely. Rather than invasion, China could use cyberattacks, economic blockades, or a “psychological war” convincing Taiwan to lose confidence in Western military protection while appealing to the cultural heritage Chinese and Taiwanese people share. Polling suggests this might work: 43% of Taiwanese view China as more dependable than America, compared to 49% who favor the US.

China seems to believe that centuries of cultural commonality will override their political separation, with Taiwan ultimately choosing reunification over alliance with what Beijing portrays as a declining West. This suggests China may accept severe economic costs to restore what it sees as natural, cultural, and territorial unity. |
Challenge #2: Huang’s Refusal to Engage With AI Safety Concerns
Another critical vulnerability Witt identifies is Huang’s complete dismissal of concerns about the potential risks of artificial intelligence. While prominent researchers have expressed serious concerns about the potential for AI systems to become uncontrollable or even pose existential risks to humanity, Huang’s position is that AI systems are simply processing data and aren’t a threat to human welfare or survival. According to Witt, when he pressed Huang about the potential dangers of the AI systems that Nvidia’s chips enable, Huang refused to engage with the substance of these concerns. Instead, Huang became angry, yelling at Witt and calling his questions ridiculous.
(Shortform note: AI safety concerns fall into two categories, and experts disagree about which poses the greater threat. Those concerned with existential risk worry that AI systems might develop their own goals or pursue tasks too single-mindedly, as in a thought experiment where an AI designed to maximize paperclip production prioritizes that goal over human survival. Such concerns are taken seriously by researchers like Geoffrey Hinton, who won a Nobel Prize for his work on neural networks. But Gary Marcus and Ernest Davis argue in Rebooting AI that these concerns distract from a more credible and immediate danger: the likelihood that we’ll cede decision-making to AI in situations where its lack of understanding proves catastrophic.)
Witt reveals that other Nvidia executives show a similar lack of concern or are reluctant to contradict their CEO. One source told Witt that the executives seem more afraid of Huang yelling at them than they are of possibly contributing to human extinction. Witt presents this as a fundamental tension in the current AI landscape: While some of the most respected researchers in artificial intelligence are warning about potentially catastrophic risks, the CEO of the company enabling these developments refuses to seriously consider such concerns. This suggests that legitimate safety considerations may not receive adequate attention within the company that controls the infrastructure powering AI development.
(Shortform note: Nvidia executives’ reluctance to challenge Huang may reflect a deeper irony: We often mistake validation, from other humans or from AI, for sound reasoning. Mike Caulfield (Verified) explains that AI systems act as “justification machines,” trained to provide responses that validate our views and biases rather than challenge them. Huang’s position on AI safety might exemplify this same dynamic. Research shows that professionals across fields are more likely to trust information that confirms their beliefs, and higher expertise increases this bias. When Nvidia executives don’t openly challenge Huang’s dismissal of AI safety concerns, he may interpret this as rational consensus instead of the suppression of critical thinking.)
Challenge #3: Sustainability Questions About Nvidia’s Future
Finally, Witt highlights two critical challenges that could limit the continued growth of AI systems powered by Nvidia’s technology: environmental impact and organizational continuity.
Energy Consumption
The first challenge is environmental. Data centers filled with thousands of GPUs consume massive amounts of electricity, contributing to growing concerns about AI’s environmental impact. Companies like Google and Microsoft have seen their carbon emissions increase dramatically due to their expanding AI infrastructure. Witt notes that early AI systems consumed power comparable to household appliances, but current systems require enough electricity to power entire neighborhoods. As AI models continue to grow in size and capability, their energy requirements are expanding exponentially, raising questions about whether current development trajectories are environmentally sound.
| The Real Environmental Leverage Points in AI

The debate over AI’s environmental impact often presents an implicit choice between halting AI development altogether and accepting unlimited energy consumption to let it continue, but this framing obscures where the real leverage lies. While concerns about AI’s energy use are valid and urgent—data centers now rank as the 11th largest electricity consumer globally—the impact emerges primarily from corporate decisions about AI development and deployment rather than personal use. Research suggests that watching Netflix for an hour consumes roughly the same energy as 26 ChatGPT queries, illustrating that an individual’s AI usage has a relatively modest environmental impact compared to other digital activities.

The meaningful decisions happen at the corporate level: how frequently companies like OpenAI release new models, whether tech giants like Microsoft and Google invest in renewable energy sources for their data centers, and whether companies disclose their actual energy consumption and carbon footprints. This creates an interesting position for Nvidia: While the company benefits from increased AI development regardless of its environmental impact, its corporate customers might increasingly demand more energy-efficient chips if environmental costs—or social pressure—become too acute.

This dynamic also creates pressure on Nvidia’s customers to fix inefficiencies in their own AI development processes. As one analyst notes, many tasks don’t need AI models to be “better” at generating creative, polished outputs, but to be “right” in providing accurate answers, a distinction that matters for justifying energy consumption. When companies release new models every few weeks, they often invest vast amounts of energy for marginal gains that users barely notice. For example, GPT-4 dramatically outperformed GPT-3 on benchmarks, but its real-world differences were subtle for most users.

Plus, the current approach of retraining models, rather than updating them with new information, represents a massive inefficiency that might be solved through “model editing” techniques that could reduce energy consumption while keeping AI systems current. |
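To make the streaming-versus-queries comparison concrete, here’s a back-of-the-envelope sketch in Python. The only figure taken from the passage above is the roughly 26-queries-per-streaming-hour ratio; the absolute energy value (`NETFLIX_HOUR_WH`) is an assumed placeholder for illustration, not a measurement from the book.

```python
# Back-of-the-envelope energy comparison (illustrative assumptions only).
# The ~26 queries-per-streaming-hour ratio comes from the text above;
# the absolute watt-hour figure below is an assumed placeholder.

NETFLIX_HOUR_WH = 80.0         # assumed energy for one hour of streaming (Wh)
QUERIES_PER_NETFLIX_HOUR = 26  # ratio cited in the text

# Energy per query implied by these assumptions
wh_per_query = NETFLIX_HOUR_WH / QUERIES_PER_NETFLIX_HOUR

def daily_queries_equivalent_hours(queries_per_day: int) -> float:
    """Express a day's worth of queries as equivalent streaming hours."""
    return queries_per_day * wh_per_query / NETFLIX_HOUR_WH

if __name__ == "__main__":
    print(f"Implied energy per query: ~{wh_per_query:.1f} Wh (under assumptions)")
    print(f"100 queries/day ≈ {daily_queries_equivalent_hours(100):.1f} streaming hours")
```

Whatever absolute figure you plug in, the ratio is what matters: an individual’s query volume translates into a few streaming hours at most, which is why the box above locates the real leverage in corporate-level decisions rather than personal use.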
Succession at Nvidia
The second challenge is organizational. Witt reveals that Nvidia has no clear succession plan for Jensen Huang. His eventual departure could create significant challenges for a company so closely identified with its leader’s vision. The flat organizational structure that has served Nvidia well under Huang’s leadership may become a liability in transition scenarios, since there is no clear second-in-command, and more than 60 people report directly to the CEO.
This concentration of decision-making authority in a single individual creates vulnerabilities for a company whose market value depends heavily on continued strategic vision. Given Nvidia’s central role in AI development, any disruption to its leadership or strategic direction could have far-reaching implications for the pace and direction of AI progress worldwide.
(Shortform note: Nvidia’s succession challenges spotlight a broader problem with Silicon Valley’s “great man” narratives: founder-focused stories that reinforce the idea that individual genius drives technological progress. This obscures the collective effort that made AI possible: decades of research by computer scientists, an open-source software movement that enabled experimentation, the engineering that built the foundations of neural networks, and social movements demanding more capable technology, which all created the conditions for AI breakthroughs. It also implies that we wouldn’t have modern AI without Huang, when in fact, had Nvidia not existed, the same forces might have driven someone else to play a similar role.)
Learn More About Nvidia’s Potential Crash
To put the question of whether Nvidia will crash in its broader context, take a look at Shortform’s guide to the book The Thinking Machine by Stephen Witt.