In this episode of All-In with Chamath, Jason, Sacks & Friedberg, the hosts examine the recent lease agreement between Anthropic and Elon Musk, exploring how Musk's infrastructure play positions him as a major force in AI compute. The discussion covers Anthropic's exponential revenue growth and what that means for competition in the AI industry, including concerns about potential monopoly formation and the structural advantages that scale creates.
The hosts also address the White House's approach to AI regulation, clarifying misconceptions about an "FDA for AI" and discussing the balance between security and innovation. Beyond policy, they explore AI's measurable economic impact—from cloud provider growth to enterprise productivity gains—and tackle the growing public backlash against AI wealth concentration. The conversation concludes with proposals for distributing AI benefits more broadly, including opportunities in healthcare and education where regulation has historically limited innovation.

The recent lease agreement between Anthropic and Elon Musk represents a major shift in AI infrastructure. By leveraging SpaceX's Colossus data center and related assets, Musk has positioned X.ai to compete directly with cloud giants like AWS, Azure, and Google Cloud while addressing critical compute bottlenecks.
Musk's rapid build-out of data center and energy infrastructure has fundamentally changed AI compute power dynamics. SpaceX's Colossus and companion facilities—boasting over 1.2 gigawatts of capacity—enable X.ai to compete at hyperscale levels. Brad Gerstner describes SpaceX's "five layer cake" stack spanning launch, connectivity, compute, space data centers, and applications, creating vertical integration across space, energy, and compute. This positions Musk as a kingmaker in AI infrastructure.
X.ai's Elon Web Services (EWS) is projected by Gerstner to generate $4-5 billion annually, substantially offsetting investments in training Grok and building further capacity. This high-margin revenue stream reduces financial pressure on X.ai's R&D. Chamath Palihapitiya notes this terrestrial capacity provides a structural core business, easing valuation concerns tied to riskier orbital data center ventures.
Anthropic has faced severe compute and power constraints, limiting its growth. The new lease provides access to over 220,000 Nvidia GPUs and more than 300 megawatts of energy through Colossus. This immediately lifts restrictions: Claude users no longer face API rate limits, and paid Opus API volumes have dramatically increased. Anthropic's scaling was previously constrained not by demand but by limited compute supply.
For Musk, leasing spare Colossus capacity transforms idle infrastructure into revenue while reserving resources for X.ai's own model training. The deal illustrates an emerging cooperative dynamic—while Anthropic and X.ai compete fiercely on models, they collaborate on infrastructure. As Gerstner observes, this partnership approach differs from zero-sum competition and strengthens American AI's global position.
Looking ahead, Musk's vision extends to distributed infrastructure through Tesla Powerwalls with embedded compute hardware, GPU clusters, and Starlink connectivity in homes. Companies like Span and BasePower are already piloting co-located data centers in neighborhoods, creating distributed resource pools where households can contribute and monetize compute capacity. The ultimate trajectory points toward orbital data centers in space, enabled by SpaceX's launch capabilities, while terrestrial infrastructure provides critical near-term revenue and business stability.
Anthropic's annual recurring revenue (ARR) tripled from roughly $10 billion to $30 billion early in the year, then soared to $44 billion in April. Gerstner highlights that Anthropic and OpenAI together now generate $80 billion in revenue, up from $30 billion at year's start. Forecasts suggest Anthropic could reach $100 billion in ARR by year's end, with some projections showing $1 trillion by 2027—a trajectory that would eventually surpass the combined revenue of the current "Mag-7" tech giants.
This extraordinary growth, driven entirely by enterprise demand for coding and AI tools, is constrained only by supply—data center capacity and available power—not by market demand. Gerstner observes that if infrastructure could scale immediately, revenues would climb even more steeply.
David Sacks argues that Anthropic's trajectory echoes historical monopoly formation, drawing parallels to Standard Oil. He suggests that AI safety rhetoric can distract from monopoly concerns while companies push policies that strengthen their competitive moats. Once a company claims 80% of its market, Sacks notes, it's functionally a monopoly, and if Anthropic continues its exponential growth for 18 more months, it could reach an unprecedentedly dominant position.
Gerstner pushes back, arguing it's premature to call the current AI ecosystem a monopoly with only two companies generating substantial revenue. Yet competition faces structural disadvantages as Anthropic's compute, revenue, and network scale compound its lead. While OpenAI continues advancing with new models and architectures, the practical effect is that Anthropic's advantages become increasingly difficult to displace.
Recent reports suggest the White House is considering oversight measures for AI models. Jason Calacanis highlights media speculation about an "FDA for AI," but Brad Gerstner clarifies that after speaking with National Economic Council Director Kevin Hassett, the analogy was meant to describe coordination, not pre-approval authority. White House Chief of Staff Susie Wiles rejects the FDA-style approval model, reaffirming the administration's pro-innovation philosophy. David Sacks emphasizes the president is "the most pro innovation president we've ever had" and that the regulatory framework prioritizes cybersecurity and competitiveness, not blanket controls.
Industry leaders see know-your-customer (KYC) verification and API logging as effective tools for AI safety. Calacanis proposes KYC for controlling access to advanced models, while Gerstner confirms that major AI labs already monitor API usage and coordinate with government agencies to address threats. This "reasonable security" comes from industry self-governance and voluntary cooperation, not imposed approvals.
However, Sacks warns that some voices leverage "AI doomer" narratives to push for strict regulations that would entrench incumbents by burdening startups with compliance overhead. He argues that requiring pre-release government approval would slow U.S. innovation and cede global leadership to less-regulated competitors. Leaders agree that defending against cyber threats requires rapid, coordinated action between government and private sector, not top-down control. The U.S. cybersecurity industry's strength comes from agility and public-private cooperation, not restrictive approval regimes.
Major cloud providers are experiencing explosive growth reflecting enterprise AI demand. AWS grew 28% to $150 billion, Azure 39% to $108 billion, and Google Cloud 63% to $80 billion. These companies are expanding operating margins with minimal headcount growth—the MAG-5 combined saw only 3% headcount growth over three years. S&P 500 operating margins improved from 11.8% to 13%, suggesting substantial AI-driven efficiency gains.
As Sacks observes, sustained enterprise spending on AI-related coding tokens only happens if businesses are seeing demonstrable ROI. The rapid monetization indicates enterprises wouldn't invest unless there were immediate productivity benefits.
AI is actively cutting costs and driving measurable improvements. Companies like Nike and DoorDash use AI to generate product imagery, eliminating costly photo shoots while achieving double-digit improvements in advertising effectiveness. Startups especially benefit from AI coding tools, enabling small teams to accomplish what previously required much larger staffs. Calacanis notes firms can now ship products at speeds that would have required 22-person development teams, using far fewer resources.
The AI-driven economic boom faces a critical fork within 24-36 months. One path sees AI automation reducing operational expenses and workforce size, potentially causing social disruption. The alternate path sees AI boosting productivity and revenue while fostering new businesses, enhancing wider economic growth. Chamath Palihapitiya emphasizes that full realization of AI ROI is expected as traceability between AI spending and economic outcomes matures.
Contrary to predictions, the labor market remains robust. U.S. unemployment hovers around 4.2%, and recent college graduates are thriving due to their AI familiarity. Labor force participation sits at 61.9%, suggesting stability despite ongoing AI adoption.
Palihapitiya observes a profound shift in public sentiment toward AI, with negativity dominating reactions. Projects to increase energy supply for AI face heavy protest, with nearly half of planned capacity at risk due to backlash. Despite this, Sacks highlights that AI drives 75% of Q1 GDP growth and is fueling a blue-collar boom with construction wages rising 25-30%.
Yet AI's approval remains low, ranking only 29th out of 39 salient issues in polling. Palihapitiya grades tech leadership "D minus trending to F" in communicating AI's upside, arguing that failure to communicate positive benefits enables fear-mongering to dominate. Gerstner and Calacanis reinforce the need to better tell AI's story and deliver clear, broad-based benefits.
Trillion-dollar net worths accruing to a handful of founders spark public anxiety over inequality and power concentration. Palihapitiya underscores that the impression of a few controlling AI's future is driving backlash, intensifying calls for regulation to ensure AI-generated wealth benefits the many, not just a small elite.
The panel explores solutions including AI company IPOs voluntarily allocating 1-5% of shares to ordinary Americans via "Invest America" accounts. Calacanis suggests tech leaders pledge to donate 1% of holdings annually for 20 years to healthcare, education, and housing. Raising the minimum wage for tech-driven employers could also distribute AI productivity gains, increasing purchasing power without triggering inflation.
While AI has disrupted many sectors, healthcare and education remain under-innovated due to heavy regulation. Calacanis and Gerstner argue that founders view these areas as regulatory "kryptonite," deterring investment. AI presents an opportunity to address these challenges through tools like AI-powered health coaches, diagnostics, and tutoring that could lower costs and improve outcomes. The panelists call for a shift in tech and policy focus toward using AI to extend life, reduce suffering, and democratize access to quality services—changes that could significantly shift public sentiment and alleviate core anxieties about the future.
1-Page Summary
The recent lease agreement between Anthropic and Elon Musk marks a major transformation in the AI infrastructure landscape. By leveraging SpaceX's Colossus data center and related assets, Musk has positioned X.ai and his broader ecosystem to directly compete with cloud giants while solving critical compute bottlenecks in the current AI boom.
Elon Musk’s foresight in building out data center and energy infrastructure at unprecedented speed has fundamentally changed the balance of power in AI compute. SpaceX’s Colossus—and companion facilities MacroHard and MacroHarder, which together boast over 1.2 gigawatts of capacity—are now online, enabling X.ai to train and serve AI models at scale. This build-out vaults X.ai’s Elon Web Services (EWS) into the realm of hyperscalers, placing it shoulder-to-shoulder with established titans like Amazon Web Services, Google Cloud, and Azure. As Brad Gerstner describes, SpaceX now boasts a “five layer cake” stack: launch, connectivity, compute hyperscalers, space data centers, and applications and models, plus other bets. This vertical integration across space, energy, and compute positions Elon as a kingmaker in AI infrastructure.
The emergence of EWS brings a new high-margin revenue stream, projected by Gerstner to generate an additional $4 to $5 billion a year—an amount that materially offsets the substantial investments required to train X.ai’s Grok and build out further capacity. This incremental revenue, above already robust analyst estimates, means X.ai can sustain aggressive R&D and infrastructure investments without immediate pressure for commercial returns. Chamath Palihapitiya notes that this terrestrial capacity provides a structural core business for Elon, blunting valuation anxieties tied solely to future and riskier orbital data centers.
Elon saw the growing bottleneck in power and compute before most of the industry and executed rapidly to secure both. As a result, he has become the rare operator with the scale, energy assets, and infrastructure to support both his companies’ needs and those of the broader AI ecosystem. Gerstner underscores Elon’s unmatched ability to “convert electrons to tokens,” connecting expertise in battery deployment, solar, and gigafactories from Tesla and SolarCity with hyperscale compute operations.
Anthropic, like many AI companies, has faced severe compute and power constraints. The new lease agreement provides Anthropic access to over 220,000 Nvidia GPUs and more than 300 megawatts of energy, via Colossus. This immediately lifts prior restrictions: Claude users no longer face API rate limits or peak usage caps, and paid Opus API volumes have dramatically increased. Prior to this, Anthropic’s scaling and revenue growth were not constrained by demand but by limited supply of high-performance compute—especially power.
The leasing of spare Colossus capacity to Anthropic is a pivotal financial and strategic move. The Colossus H100 assets, described as “less connected” but “great for inference,” are now monetized in a big way, alleviating financial pressure on X.ai, allowing it to invest in Grok’s development, and providing a profitable alternative to leaving that hardware idle. This model lets X.ai remain a frontier AI lab without the burden of massive, unprofitable upfront capital commitments. Importantly, Elon reserves enough resources for X.ai’s needs, ensuring internal model training is never neglected.
A noteworthy theme is the cooperative dynamic emerging among AI labs. While Anthropic and X.ai remain fierce competitors in AI models, the infrastructure layer sees collaboration—frontier labs partner to build and share resources, diverging from a zero-sum mentality and instead shoring up global competitiveness for American AI. As Gerstner observes, this détente and infrastructure kinship is vital for maintainin ...
The Anthropic-Elon Compute Deal and AI Infrastructure Strategy
In the first four months of the year, Anthropic’s annual recurring revenue (ARR) grew from roughly $10 billion to $30 billion, tripling before soaring again to $44 billion in April. Brad Gerstner highlights that at the year’s start, Anthropic and OpenAI together were producing about $30 billion in revenue, a figure that has now reached $80 billion in four months. Forecasts suggest Anthropic could 10x this year, ending it at roughly $100 billion in ARR. Some discussions even project Anthropic could reach $1 trillion in ARR by 2027—a trajectory that would surpass the combined annual revenue of the current "Mag-7" tech giants (Apple, Microsoft, Alphabet, Amazon, Nvidia, Meta, Tesla), which collectively generate around $2.3 to $2.35 trillion.
David Sacks and Chamath Palihapitiya note that while giants like Google grow at approximately 20% year-over-year, Anthropic’s exponential acceleration is unprecedented, growing at rates “not 100%, certainly not 1000%.” The potential is such that if this growth continues, Anthropic could eclipse the power, valuation, and influence of the largest tech conglomerates ever assembled, transforming it from a "Mag-7" world to a "Mag-1".
This extraordinary spike in revenue is driven entirely by enterprise demand, primarily for coding and AI tools. The market is currently absorbing all of Anthropic’s expanding output, especially for coding—indicating effectively unlimited total addressable market. Brad Gerstner observes that the only thing constraining revenue is supply (not demand), namely capacity at data centers and available power. If those could scale immediately, he believes revenues would climb even more steeply.
While Anthropic and OpenAI are both posting substantial revenues, observers like David Sacks argue that Anthropic’s recent growth trajectory echoes historical monopoly formation. Sacks draws a direct analogy to John D. Rockefeller and Standard Oil, noting that companies can mask monopolistic consolidation behind a facade of public benefit and safety, and distract regulators and the public with side issues. He posits that if Rockefeller had been better at public relations—branding as “Safe Oil” and advocating for government safety regulation—the public would have missed the consolidation underway, focused instead on debates over product safety. In the same vein, Sacks suggests today's AI safety rhetoric can serve to distract from the reality of monopoly formation and regulatory capture, as companies push policies that strengthen their moats and raise barriers to entry for competitors.
Gerstner pushes back, arguing it is premature to call the current AI ecosystem a monopoly when only two companies—Anthropic and OpenAI—generate substantial revenue and are still fledgling by the standards of legacy tech giants. Yet, Sacks counters that once a company claims 80% of its market, it’s functionally a monopoly, and if Anthropic continues its exponential trend for just 18 more months, it could find itself in such an unprecedentedly powerful position.
This raises concerns that, just like Standard Oil, Anthropic (with OpenAI) could soon dominate key technology infrastructure more thoroughly than any private enterprise in history, using regulatory and policy debates to shield itself from scrutiny.
Competition in the sector is theoretically possible. OpenAI is already pivoting its focus, advancing rapidly with new models like "5.5" based on its new Spud architecture, and offering strong competition ...
Anthropic's Exponential Growth and Monopoly Concerns
Recent reports suggest the White House is considering new oversight measures for artificial intelligence (AI) models, fueled by concerns over advanced AI such as Anthropic's Mythos model and the risks of AI-enabled cyberattacks. Jason Calacanis highlights media coverage speculating about an "FDA for AI," suggesting vetting and review for new models. However, Brad Gerstner clarifies after speaking with National Economic Council Director Kevin Hassett that the analogy to the FDA was intended to describe coordination, not an approval authority. Hassett and other officials, according to Gerstner and David Sacks, do not support a regime where every new model needs federal pre-approval before release. Instead, the media, not the administration, amplified the idea of FDA-style regulation, leading to confusion and a strong reaction in Silicon Valley.
White House Chief of Staff Susie Wiles further rejects the FDA-style approval model, reaffirming the administration’s pro-innovation philosophy. Sacks describes the president as "the most pro innovation president we've ever had" and stresses that the administration’s regulatory framework prioritizes specific legislative goals, such as cybersecurity and American competitiveness, not blanket controls or pre-release approval that could stifle innovation.
Industry leaders see know-your-customer (KYC) verification and API logging as effective, non-intrusive tools for AI safety. Jason Calacanis proposes KYC for controlling access to potent AI models, preventing malicious actors from exploiting advanced capabilities. Gerstner confirms that major AI labs already monitor API usage, track for suspicious or distillation activities, and coordinate extensively with government agencies to flag and address cyber threats. He notes the value in allowing certain uses to better understand extraction attempts and the nature of threats.
This “reasonable security” comes from robust industry self-governance and voluntary cooperation, not imposed approvals. Calacanis and Sacks emphasize self-policing among AI firms, who are motivated to set strong safeguards to avoid legal repercussions and reputational harm. The existing infrastructure relies on trust, transparency, and collaborative responses to incidents, rather than rigid government-imposed processes.
Some voices, Sacks argues, are leveraging "AI doomer" narratives and safety rhetoric to push for strict regulations that would entrench incumbent tech corporations by burdening startups with compliance overhead. He and Gerstner point out that requiring pre-release government approval would allow the state to pick AI winners and losers, slowing U.S. innovation and ceding global leadership to less-regulated competitors such as those in China. While cyber issues—such as the concerns raised by Anthropic’s Mythos—necessitate urgent system hardening in the short term, these must not be used to permanently expand regulatory infrastructure in Washington.
Sacks warns that ideologues are using crises as opportunities to entrench regulatory capture, n ...
AI Regulation and Government Oversight
AI technology is fundamentally reshaping business operations, efficiency, and the broader workforce. Recent data from hyperscalers and ongoing discussions among technology investors showcase the tangible and immediate economic impact, while revealing critical questions about the sustainability and direction of AI-driven productivity gains.
Major cloud providers—Amazon Web Services (AWS), Microsoft Azure, and Google Cloud—are experiencing explosive revenue growth that underscores enterprise demand for AI computing capacity. AWS grew 28% to $150 billion in revenue, Azure 39% to $108 billion, and Google Cloud an impressive 63% to $80 billion over the recent period. These companies are not only growing revenues but also expanding their operating margins, with minimal headcount growth: the MAG-5 (major tech companies) combined have only seen about 3% headcount growth over the last three years. Meanwhile, S&P 500 operating margins have improved significantly, rising from 11.8% in Q1 2024 to 13%, suggesting substantial AI-driven efficiency gains.
The sustained, month-over-month enterprise spending on AI-related coding tokens and tools is a crucial data point. As David Sacks observes, such ongoing investment only happens if enterprises are already seeing demonstrable ROI. The rapid monetization of AI tokens indicates that enterprises wouldn't be pouring in money unless there were immediate productivity benefits. This productivity is flowing through from core infrastructure, to AI models, to applications, finally reaching end users, fueling an economic boom.
AI is actively cutting costs and driving measurable improvements in business performance. Companies such as Nike and DoorDash use AI to generate product imagery and photographic assets, removing the need for costly and time-consuming photo shoots. The application of AI in ad creative produces assets at half the cost while achieving double-digit percentage improvements in advertising effectiveness.
Startups especially benefit from AI coding tools, enabling them to accomplish more with fewer employees and less capital. With the aid of AI agents and automation, small teams orchestrate what used to require much larger staffs. In practice, Jason Calacanis notes that firms can now ship products at a speed that previously would have required 22-person development teams, using far fewer resources. Startups gain so much value from these AI tools and tokens that they halve their hiring, stretch investments further, and deliver faster.
The AI-driven economic boom faces a pivotal moment within the next two to three years. One possible path is that AI automation drives significant reductions in operational expenses (OpEx) and workforce size, leading to social disruption as margins increase mainly through cost-cutting. The alternate path sees OpEx stay stable (or even rise) as AI boosts productivity, revenue, and fosters new businesses and services, enhancing wider economic growth and living standards.
True transformation will require not only the upfront investment (“spending X”) but clear proof that AI adoption delivers tang ...
AI's Economic Impact and Return on Investment
Chamath Palihapitiya observes a profound shift in public and political sentiment towards tech and particularly AI, with negativity dominating both community and political reactions. He notes that projects to increase energy supply for AI—like new data centers—face heavy protest, with nearly half of planned capacity at risk due to public backlash, exacerbating supply constraints. Despite this, AI is driving significant economic progress: David Sacks highlights that AI is responsible for 75% of Q1 GDP growth, is fueling a construction and blue-collar boom, and is causing wage increases of 25-30% in construction.
Yet, AI’s approval remains low, and public concern does not yet rank highly among voters. Sacks cites polling showing that, despite extensive media focus on AI's risks, AI ranks only 29th out of 39 salient issues—far below concerns like healthcare and general economic welfare, which are top-of-mind for most people. The panel agrees that humans’ bias toward safety makes us hyper-aware of AI’s potential harms, from deepfakes to job loss and bioweapons.
A key factor amplifying the doom narrative is the failure of tech leaders to communicate the positive potential and realized societal benefits of AI. Palihapitiya grades tech industry leadership “D minus trending to F” in communicating AI’s upside, arguing that the lack of clear and compelling messaging on widespread social investment and uplifting American society enables negative perceptions and fear-mongering to fill the gap. Brad Gerstner and Jason Calacanis reinforce the need to better tell AI’s story and deliver clear, broad-based net benefits.
Trillion-dollar net worths accruing to a handful of founders and investors stoke public anxiety over wealth inequality and perceived power concentration. Palihapitiya underscores that the impression of a few individuals controlling the “keys” to the AI-driven future is driving backlash, with tech leaders coming across as untrustworthy or self-interested due to a lack of substantial reinvestment in society. This backdrop intensifies calls for regulation to ensure that AI-generated wealth benefits the many, rather than just consolidating among a small group at the top.
The negative sentiment, fueled by fear of job loss and socioeconomic polarization, leads to bipartisan regulatory momentum. Palihapitiya forecasts increasing oversight, whether under a Democratic or Republican administration, citing a widespread sense that AI’s economic rewards are not adequately shared with ordinary people.
The panel explores several solutions to address these equity concerns and diffuse the backlash. Gerstner and Calacanis propose that AI company IPOs could voluntarily allocate 1–5% of shares to ordinary Americans—via “Invest America” accounts—so citizens directly benefit and share in AI’s compounded growth. Calacanis suggests tech leaders could pledge to donate 1% of their holdings annually for the next 20 years to causes like healthcare, education, and housing, creating a broad-based reinvestment mechanism that does not sap entrepreneurial incentive.
Raising the minimum wage is also discussed as a way for te ...
Public Backlash Against AI Wealth Concentration
