In this episode of All-In with Chamath, Jason, Sacks & Friedberg, the hosts examine the intensifying competition between OpenAI and Anthropic, analyzing how their divergent business models and growth rates are reshaping the AI landscape. The discussion covers Anthropic's dramatic revenue growth compared to OpenAI's strategic challenges, exploring questions about valuations, enterprise focus, and the infrastructure investments required to maintain competitive advantages in the AI race.
The hosts also address broader constraints facing the AI industry, including mounting opposition to data center construction, infrastructure bottlenecks, and alternative energy solutions emerging in response. They analyze current market valuations in light of physical growth limits and discuss the gap between AI's promise and reality in enterprise settings, where implementation challenges and organizational resistance continue to prevent many large companies from realizing profitable outcomes from their AI investments.

OpenAI and Anthropic are locked in a rivalry that's reshaping artificial intelligence. Their divergent growth trajectories and business models reveal who might emerge as the market leader.
While both companies reported roughly $30 billion in revenue at the beginning of Q2, Anthropic's 10x annual growth dramatically outpaces OpenAI's 3-4x growth. Anthropic skyrocketed from $1 billion to $10 billion last year and reached $30 billion by Q1, with projections of $80–100 billion by year's end. This exponential growth rate is unprecedented and positions Anthropic to rapidly overtake its rival.
Anthropic's explosive growth ties directly to its enterprise focus, particularly on coding applications. Their metered token pricing scales as businesses increase usage, creating robust revenue streams. In contrast, OpenAI prioritized the consumer segment with flat $20/month subscriptions that cap per-user revenue—only 3–4% of consumers buy premium plans, hindering overall scalability.
Investors are questioning OpenAI's $850 billion valuation, with secondary markets recently pricing Anthropic higher for the first time. Analysts note that for OpenAI's last funding round to pay off, the company would need to IPO at $1.2 trillion, yet there are no buyers even at the current $850 billion level. Additionally, Anthropic's $30 billion is primarily organic revenue, while OpenAI's figure is inflated by $8 billion from revenue sharing with AI model providers, meaning Anthropic's real enterprise income is within 20% of OpenAI's on an adjusted basis.
OpenAI faces rising frustration over its lack of clear strategic focus. The company has been criticized for spreading efforts across consumer and enterprise markets instead of concentrating on high-value coding applications where the biggest growth and margins exist. Anonymous investors asked why OpenAI isn't fully capitalizing on its billion-user ChatGPT business, which still grows 50–100% yearly.
A leaked memo from OpenAI's Chief Revenue Officer acknowledged Anthropic's rapid rise and revealed a strategic pivot toward business customers and the agent platform layer. OpenAI's hire of Peter Steinberger from the OpenClaw open-source project is seen as an effort to capture talent and ensure future breakthroughs stay within OpenAI's ecosystem, potentially stifling external competition.
The next phase of the AI race hinges on profitable growth and controlling the infrastructure necessary for advanced compute. Travis Kalanick explains that network effects from scale in users, data, and revenue create near-insurmountable competitive advantages. Profitable growth allows for continual compute investment, which fuels machine learning advantages and locks in leadership.
Anthropic's funding model depends on revenue, while OpenAI relies on repeated massive capital raises like its recent $122 billion round. If Anthropic sustains 10x growth for another year or two, even OpenAI's capital war chest may prove insufficient if revenue and margins don't scale accordingly. The market is reaching an inflection point: hyper-growth funded by venture capital has limits, and eventually demands genuine profit and revenue sustainability. Anthropic's advantage lies in growing operating profits and margins, allowing reinvestment without new outside capital, while OpenAI's negative contribution margins risk becoming unsustainable.
The explosive demand for AI compute power is colliding with infrastructure constraints, regulatory headwinds, and public opposition that threaten both short-term progress and long-term competitiveness.
Chamath Palihapitiya reports that about 40% of contested data center projects are being canceled—more than double last year's rate—with $162 billion in economic value at stake. David Friedberg asserts that Americans increasingly view data centers as symbols of wealth concentration and elite-focused progress, representing "the temple of the wealthy" while most people see only marginal AI benefits like medical advice from chatbots.
The backlash is particularly intense in Democratic cities, with NIMBY groups blocking projects through regulatory capture and elections. Palihapitiya cites examples of city boards being ousted to reverse data center approvals, and entire states like Maine banning new construction. David Sacks notes that about 30 states may end up banning data centers as residents fear increased power costs without local benefits.
Tech billionaire-funded advocacy groups use climate and water arguments to oppose competitors' data centers, with Sacks and Palihapitiya discussing how "doomer NIMBYism" is sometimes astroturfed to slow rivals' AI progress.
Major AI labs face a pivotal decision: remain dependent on hyperscalers for compute or invest billions to build their own infrastructure. Palihapitiya calls the lack of proprietary compute a "five alarm fire," as inability to secure direct infrastructure access could throttle growth and revenue. Sacks notes that Anthropic strategically supported anti-data center sentiment to slow competitors while renting hyperscaler compute, but growth has now pushed them to the limits of third-party capacity.
Sacks and Palihapitiya suggest Anthropic delayed its Mythos model largely due to insufficient compute, instead concentrating resources on Opus 4.7. Building owned infrastructure offers priority compute access in a tightening market, but demands immense capital and faces NIMBY opposition and regulatory uncertainty. Since hyperscalers still control 60% of total compute, labs without proprietary access risk stalling at pivotal moments.
New models are emerging in response. Crusoe and CoreWeave are pioneering energy-independent data centers, bringing their own power on site. This "bring your own energy" model circumvents years-long grid interconnection waits and lessens community grid concerns. Bloom Energy is seeing meteoric stock growth providing clean, low-emission onsite power that accelerates permitting from years to months.
The "Ratepayer Protection Pledge" requires hyperscalers to supply their own power or support grid expansion, insulating residential customers from cost hikes. Meanwhile, Jason Calacanis highlights Elon Musk's $18 billion "Colossus" project with 555,000 GPUs as an unprecedented private compute investment. Palihapitiya and Sacks explain that overbuilding capacity provides privileged compute access for internal models while excess capacity becomes a revenue-generating asset—Musk has already begun renting surplus compute to companies like Cursor. Meta's Prometheus cluster, reaching 150,000 GPUs by 2026, will similarly prioritize internal AI development while competitors face allocation bottlenecks.
Current stock market valuations are driven by the AI narrative, yet significant questions remain about sustainability given unresolved ROI questions and physical growth constraints.
David Sacks draws comparisons to the late 1990s dot-com bubble, observing that today's market awards "crazy valuations" to AI firms regardless of real gross margins or capital demands. Chamath Palihapitiya points out that traditional value indicators like the Shiller PE ratio and Buffett index are peaking at all-time highs, but only eight or nine mega-cap companies are driving the S&P 500 toward record levels while the broader market stagnates, creating a fragile valuation structure.
Travis Kalanick argues that much of the market's movement is dictated by geopolitical signals, particularly presidential statements about conflict resolution like those concerning Iran, rather than direct AI productivity gains. Chamath highlights Warren Buffett's near $300 billion cash position as evidence of value investors' skepticism—while retail investors pile into AI-exposed equities, Buffett appears to be waiting for a correction.
David Sacks cautions that exponential scaling cannot continue indefinitely. As AI companies reach new scales, physical limitations like compute power, electricity, semiconductors, land, and data center real estate create significant 10x growth barriers. Sacks points to complaints that Claude began "thinking less"—returning shorter responses to conserve compute resources amid surging demand—as evidence that AI firms are hitting infrastructure limits. Although newer versions like Opus 4.7 have addressed some issues, these episodes illustrate mounting friction between AI's ambitions and infrastructure realities.
Large enterprises are struggling to realize profitable outcomes from AI investments, raising questions about whether AI can deliver transformative value at scale.
David Sacks references a major McKinsey study concluding that many enterprise AI transformation projects are failing due to change management challenges and organizational inertia. Travis Kalanick elaborates that resistance comes from entrenched middle managers, bureaucrats, and complex, often undocumented processes. Change management proves particularly tricky in companies where critical procedures haven't been fully mapped.
In contrast, small businesses and startups are demonstrating clear ROI in niche domains. Jason Calacanis provides examples like Micro One and TaxGPT, which are driving significant productivity in specific applications. However, Chamath Palihapitiya cautions that startup success doesn't prove AI's transformative power at enterprise scale. Enterprise customers are pushing back, rejecting inadequate AI-generated content and demanding greater accountability, with CTOs reporting exhausted budgets requiring strict justification for future AI expenditures.
Founder-led tech-native firms like Meta and Uber are experiencing accelerated feature rollouts, but Kalanick and Palihapitiya stress that technology isn't the main constraint—enterprise culture is the dominant bottleneck. AI agents today excel at automating clearly defined, repetitive tasks but struggle with novel problems or independent decision-making. Kalanick underscores that agents "aren't that smart yet" and often make basic logical errors. Both he and Calacanis agree that human oversight remains essential, as agents lack judgment and easily get lost in complex situations.
Despite years of promises, Palihapitiya notes that no AI application has yet demonstrated the productivity gains necessary to justify towering enterprise valuations. Unlike previous profitable tech waves like mobile, AI has yet to produce any scaled, consistently profitable business for enterprises. While niche startups and founder-led companies see operational improvements, there's a lack of compelling, scalable use cases that can fulfill predictions of multi-trillion dollar enterprise value.
1-Page Summary
OpenAI and Anthropic are locked in a fierce rivalry that is reshaping the landscape of artificial intelligence, with their divergent growth trajectories, business models, and funding strategies setting the stage for a consequential industry “flippening.” A deep dive into their revenue growth, market focus, valuations, and infrastructural advantages provides insight into who may emerge as the market leader.
While both OpenAI and Anthropic reported roughly $30 billion in revenue at the beginning of Q2, the underlying growth rates tell a different story. Anthropic has posted an extraordinary 10x annual revenue growth, skyrocketing from $1 billion to $10 billion last year and reaching $30 billion by Q1, with projections of $80–100 billion by year’s end if current trends continue. In contrast, OpenAI’s annualized growth hovers at 3-4x, meaning it takes roughly two years to grow revenue tenfold, compared to Anthropic’s single year. This exponential growth rate, unprecedented even by tech standards, puts Anthropic in a position to rapidly overtake its rival.
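The growth-rate gap compounds quickly. As a rough illustration using the approximate multiples cited in the episode (not audited figures), the time to grow revenue tenfold at a constant annual multiple m is log(10)/log(m):

```python
import math

def years_to_10x(annual_multiple: float) -> float:
    """Years needed to grow revenue tenfold at a constant annual multiple."""
    return math.log(10) / math.log(annual_multiple)

# Illustrative multiples from the discussion, not official company figures:
print(years_to_10x(10.0))  # Anthropic-style 10x/year: 1.0 year
print(years_to_10x(3.5))   # OpenAI-style ~3-4x/year: roughly 1.8 years
```

This is why a seemingly modest difference in annual multiples (10x versus 3-4x) translates into Anthropic closing a large absolute revenue gap within a year or two.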
Anthropic’s explosive growth is tied directly to a clear strategic focus on enterprise use cases, particularly coding. Their metered token pricing, akin to an electricity model, scales as businesses increase their usage, resulting in robust, scalable revenue streams. In comparison, OpenAI prioritized the consumer segment, where willingness to pay is lower—only about 3–4% of consumers are inclined to buy premium, typically at a flat $20/month “all you can eat” model, which caps per-user revenue and hinders overall scalability.
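The structural difference between the two pricing models can be sketched in a few lines. All prices, conversion rates, and token volumes below are hypothetical placeholders for illustration, not the companies' actual figures:

```python
def flat_subscription_revenue(users: int, paid_share: float = 0.035,
                              monthly_fee: float = 20.0) -> float:
    """Consumer model: only a small share of users pay a flat fee,
    so revenue per user is capped no matter how much they use."""
    return users * paid_share * monthly_fee

def metered_revenue(million_tokens: float, price_per_million: float = 3.0) -> float:
    """Enterprise model: usage-based billing, like an electricity meter;
    revenue scales directly with consumption."""
    return million_tokens * price_per_million

# A subscriber who doubles their usage adds nothing to flat-rate revenue,
# while doubling token consumption doubles metered revenue.
print(flat_subscription_revenue(1_000_000))         # 700000.0 at these placeholder rates
print(metered_revenue(200) / metered_revenue(100))  # 2.0
```

The sketch makes the scalability argument concrete: the flat model grows only by adding subscribers or raising conversion, while the metered model grows automatically as each existing customer's usage grows.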
Investors are calling OpenAI’s $850 billion valuation into question, noting that secondary markets have recently priced Anthropic higher for the first time. Analysts argue that, for OpenAI's last funding round to make sense, the company now needs to IPO at $1.2 trillion—yet there are no buyers at the current $850 billion level. This reversal of investor confidence signals a shift towards Anthropic as the likely leader.
Anthropic’s $30 billion run rate is primarily organic, driven by direct product use. OpenAI’s reported $30 billion, however, is “cap inflated” by $8 billion due to revenue sharing and accounting with AI model providers and channel partners. On an apples-to-apples basis, Anthropic’s adjusted revenue is within 20% of OpenAI’s but represents real, usage-based enterprise income, not just platform licensing or shared model revenue.
OpenAI faces rising internal and investor frustration over its lack of clear strategic focus. The company has been criticized for spreading efforts thinly across both consumer and enterprise markets instead of zeroing in on high-value coding applications, where the biggest growth and margins reside.
Anonymous OpenAI investors voiced frustration, asking why the company isn’t fully capitalizing on its billion-user ChatGPT business, which still grows 50–100% yearly, instead of diluting focus with enterprise and coding initiatives. There is a call for OpenAI to devote resources to cementing ChatGPT’s leadership, rather than splitting attention.
A leaked memo from OpenAI’s Chief Revenue Officer acknowledged Anthropic’s rapid rise and critiqued their business model but also revealed OpenAI’s strategic pivot. The memo admits OpenAI previously lacked focus, but now plans to pursue business customers and win the agent platform layer, marking a shift to deeper enterprise engagement.
OpenAI’s hire of Peter Steinberger, the architect of the OpenClaw open-source project, is seen as a bid to capture talent and ensure future breakthroughs are integrated within OpenAI’s product ecosystem, potentially stifling external competition and open-source advancements.
The next phase of the AI race is defined by two things: who can achieve profitable, sustainable growth, and who can control the infrastructure necessary for advanced compute.
OpenAI vs. Anthropic: Growth, Valuations, Revenue, and AI Competitive Positioning
The explosive demand for artificial intelligence (AI) compute power is colliding with growing infrastructure and regulatory constraints, heightened public opposition, and strategic moves by leading labs and hyperscalers. America is witnessing a mounting crisis around AI data centers, with regulatory headwinds and social backlash threatening both short-term progress and long-term competitiveness.
Chamath Palihapitiya reports that out of every hundred data center projects contested, about 40 are canceled—a cancellation rate that is more than double last year's. The economic value at stake in these projects approaches $162 billion. This escalation is being fueled by shifting public sentiment: Americans are growing more negative toward AI, due to fears including job loss, wealth concentration, and the belief that technological progress benefits only the elite. David Friedberg asserts that data centers now represent a visible symbol of this perceived wealth and elite-focused progress, standing as “the temple of the wealthy” and a mechanism for tech billionaires to get ahead while the general population feels left behind. There is little broader consumer benefit felt from AI, with most people only seeing marginal improvements, such as medical advice from chatbots.
Many communities now view data centers as representing the tech elite’s interests, not public good. The backlash is particularly intense in Democratic cities, with local NIMBY (Not In My Backyard) groups blocking projects through regulatory capture and elections. Palihapitiya cites examples like a city board approving a $6 billion data center, only for half of its members to be ousted in order to reverse the decision. Entire states such as Maine have even passed laws banning new data center construction. Relocating these projects to states like Texas is not a simple fix, due to the lack of grid capacity and supporting infrastructure.
Local and state opposition has become a formidable barrier, often fueled by coordinated campaigns. David Sacks notes that about 30 states may end up banning data centers outright as residents fear that energy-hungry projects will increase power costs without offering local benefits.
Regulatory opposition is amplified by billionaire-funded advocacy groups that use environmental and resource arguments to galvanize support against new sites. Climate and water impacts are cited frequently, sometimes with limited factual basis. Sacks and Palihapitiya discuss how "doomer NIMBYism" is sometimes astroturfed, with well-funded tech advocacy groups using these arguments to slow the competition's AI progress while serving their own strategic interests.
Major AI labs such as OpenAI and Anthropic confront a pivotal decision: remain dependent on hyperscalers (Amazon, GCP, Azure) for compute, or invest billions to build and control their own infrastructure. Travis Kalanick refers to the reliance on hyperscalers as a troubling dependency. For labs now operating at scale, Palihapitiya calls the lack of proprietary compute a “five alarm fire,” as inability to secure direct access to infrastructure could throttle growth and revenue, not just product quality. These labs must secure land, power, and construction, which is “turning out to be impossible” due to regulatory and NIMBY obstacles.
David Sacks notes that for years, Anthropic strategically supported anti-data center sentiment to slow competitors, relying instead on renting hyperscaler compute. Now, growth has pushed them to the limits of third-party capacity, forcing Anthropic to seek proprietary infrastructure. This shift means past opposition strategies may backfire as labs compete for scarce approvals and power.
Anthropic’s release strategy offers a prime example: Sacks and Palihapitiya suggest the lab delayed commercial launch of its Mythos model largely due to insufficient compute, instead concentrating resources on Opus 4.7. This generated scarcity-driven marketing buzz and let government buyers believe the holdback resulted from responsible restraint, but actual resource constraints played a substantial, perhaps decisive, role.
Building owned data center infrastructure offers competitive advantage—priority access to compute in an ever-tightening market. However, the capital outlay is immense and the pathway fraught with obstacles: capital requirements, slow permitting, ongoing NIMBY resistance, and shifting regulatory landscapes. Palihapitiya warns that since hyperscalers still control 60% of total compute today, labs without proprietary access risk stalling at pivotal moments.
In response, new models are emerging. Crusoe and CoreWeave are pioneering energy-independent data centers, bringing their own power—be it natural gas, diesel, or solar—on site. This “bring your own energy” (BYOE) model circumvents years-long grid interconnection waits and lessens community concerns about grid stress.
These companies show the feasibility of rapid infrastructure expansion outside of legacy grid dependencies. By installing their own onsite power sources, they accelerate start times and provide buffer capacity for the surrounding region.
Infrastructure Constraints: Power Issues, NIMBY, and Strategic Self-Built Infrastructure
The current stock market exhibits extreme valuations, largely driven by the explosive narrative around artificial intelligence (AI), yet significant questions remain about whether these highs are sustainable given unresolved questions about AI’s return on investment (ROI) and hard physical growth constraints.
David Sacks draws comparisons to the late 1990s dot-com bubble, when companies could inflate their valuations simply by associating with new technology, often without solid business fundamentals. He observes that, much like that era, today’s market awards “crazy valuations” to AI and technology firms, regardless of real gross margins or capital demands. The incremental cost for true software is near zero, but physical-world companies are also being priced as if they share the same economics, ignoring crucial structural differences.
Chamath Palihapitiya points out that traditional indicators of value, such as the Shiller Price/Earnings (PE) ratio and the Buffett index (the total value of all US equities relative to GDP), are both peaking at or near all-time highs. However, this does not reflect broad market strength: only a handful—about eight or nine—mega-cap companies are driving the S&P 500 toward all-time highs, creating a scenario of acute “dispersion” where the wider market stagnates. This highly skewed structure makes the current rally fragile.
Despite the S&P 500 nearing all-time highs and showing strong performance on the surface, Palihapitiya underscores that the index’s gains are mostly the product of a select few giants. Most other companies are not seeing comparable growth, which complicates the market picture and raises the risk that the current valuation structure is fragile and unsustainable.
Travis Kalanick argues that much of the market’s movement is dictated by policy responses and geopolitical signals, particularly during the Trump era. He describes Trump as using the stock market as a “weathervane,” maneuvering policy to buoy the S&P 500, particularly in the aftermath of events like war-related selloffs. Investors are now hyper-focused on presidential statements about conflict resolution, such as those concerning Iran, rather than on the direct productivity gains from AI. As Trump moves between stoking market anxiety and then reassuring investors through de-escalation and practical measures, seasoned traders have adapted to these cycles of volatility and recovery.
Chamath highlights Warren Buffett’s near $300 billion cash position as evidence of value investors’ skepticism. While retail investors pile into AI-exposed equities, Buffett’s strategy signals caution; he appears to be waiting for a correction, guided by indicators showing that equities are overvalued relative to the broader economy. The management shift at Berkshire Hathaway following Buffett’s reduced day-to-day influence is noted, but the underlying posture remains a significant indicator of traditional value investing’s read on the market’s sustainability.
David Sacks cautions that, despite the breathtaking revenue growth from leading AI firms like Anthropic, such exponential scaling cannot continue indefinitely. As AI companies reach new scales, physical limitations such as compute power, electricity, semiconductors, land, and data center real estate create significant 10x growth barriers. Sacks notes that Anthropic is already running into these constraints.
AI companies dependent on ever-expanding compute face stark realities: increasing model capability requires vast new resources, but securing enough electricity, semiconductor supply, and land for data centers is becoming an ever-greater challenge. This imposes hard ceilings on both speed and scale of development.
There is growing evidence of these limits in user experience. Sacks points to complaints that Anthropic’s Claude model began “thinking less”—returning shorter responses to conserve compute resources amid surging demand. Although newer versions like Opus 4.7 have addressed some of these issues, such episodes illustrate the mounting friction between AI’s ambitions and infrastructure realities.
Market Valuations & AI: Are Highs Justified Despite Unclear ROI?
Large enterprises are struggling to realize profitable outcomes from their AI investments, as highlighted by David Sacks and Travis Kalanick. Sacks references a major McKinsey study, concluding that many enterprise AI transformation projects are failing, primarily because of change management challenges and organizational inertia. Kalanick elaborates that, especially in big companies, resistance comes from entrenched layers of middle managers, technocrats, bureaucrats, and complex, often undocumented, processes. He emphasizes that change management is inherently a human, not strictly technological, struggle and proves particularly tricky in companies where critical procedures haven’t been fully mapped.
In contrast, small businesses and startups are demonstrating clear returns on investment from AI in niche domains. Jason Calacanis provides examples from his portfolio, such as Micro One, which leverages AI to enable dark pool data collection for large language model companies, and TaxGPT, an AI tool now used by six to seven percent of accountants, driving significant productivity in tax services. In these focused applications, AI is able to deliver market growth and operational efficiency.
However, Chamath Palihapitiya cautions that the success of startups and small players at the edges does not prove AI’s transformative power at enterprise scale. He contends that unless AI demonstrates effectiveness in high-leverage, complex enterprise use cases, its perceived value will remain largely theoretical. Meanwhile, enterprise customers are pushing back: they reject inadequate AI-generated content, demand greater accountability for spending, and CTOs report exhausted budgets while requiring strict justification for future AI expenditures.
Founder-led and tech-native firms like Meta and Uber serve as notable exceptions to the sluggish enterprise AI adoption curve. Calacanis observes that these companies are experiencing accelerated feature rollouts post-AI integration, attributing this speed to a culture that embraces, rather than resists, AI-native change.
Yet, Kalanick and Palihapitiya stress that technology is not the main constraint—enterprise culture is the dominant bottleneck. Resistance from middle managers and process-bound bureaucrats slows or even stalls AI deployments. Further compounding this, Kalanick points out, are gaps in legacy system documentation—many historical processes are only informally known, making AI deployment particularly difficult when explicit instructions and logic are absent.
AI agents today are effective at automating clearly defined, repetitive tasks, but they fall short when confronted with novel problems or when required to make decisions independently. Kalanick underscores the lack of sophistication in current agents, noting that “they’re not that smart yet” and that they often make basic logical errors, such as taking conflicting positions on the same asset without explicit instruction. Both he and Calacanis agree that human oversight remains essential; AI agents require humans in the loop to provide direction and validation, as they lack judgment and easily get lost in complex situations.
Enterprise AI: Turning Tools Into Revenue or Remaining Theoretical?
