
OpenAI's Identity Crisis, Datacenter Wars, Market Up on Iran News, Mamdani's First Tax, Swalwell Out

By All-In Podcast, LLC

In this episode of All-In with Chamath, Jason, Sacks & Friedberg, the hosts examine the intensifying competition between OpenAI and Anthropic, analyzing how their divergent business models and growth rates are reshaping the AI landscape. The discussion covers Anthropic's dramatic revenue growth compared to OpenAI's strategic challenges, exploring questions about valuations, enterprise focus, and the infrastructure investments required to maintain competitive advantages in the AI race.

The hosts also address broader constraints facing the AI industry, including mounting opposition to data center construction, infrastructure bottlenecks, and alternative energy solutions emerging in response. They analyze current market valuations in light of physical growth limits and discuss the gap between AI's promise and reality in enterprise settings, where implementation challenges and organizational resistance continue to prevent many large companies from realizing profitable outcomes from their AI investments.

This is a preview of the Shortform summary of the Apr 17, 2026 episode of the All-In with Chamath, Jason, Sacks & Friedberg

1-Page Summary

OpenAI vs. Anthropic: Reshaping the AI Race

OpenAI and Anthropic are locked in a rivalry that's reshaping artificial intelligence. Their divergent growth trajectories and business models reveal who might emerge as the market leader.

Revenue Growth and Market Focus

While both companies reported roughly $30 billion in revenue at the beginning of Q2, Anthropic's 10x annual growth dramatically outpaces OpenAI's 3-4x growth. Anthropic skyrocketed from $1 billion to $10 billion last year and reached $30 billion by Q1, with projections of $80–100 billion by year's end. This exponential growth rate is unprecedented and positions Anthropic to rapidly overtake its rival.

Anthropic's explosive growth ties directly to its enterprise focus, particularly on coding applications. Its metered token pricing scales as businesses increase usage, creating robust revenue streams. In contrast, OpenAI prioritized the consumer segment with flat $20/month subscriptions that cap per-user revenue—only 3–4% of consumers buy premium plans, hindering overall scalability.
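The contrast between flat subscriptions and metered token pricing is simple arithmetic, and can be sketched as follows (all prices, rates, and volumes here are hypothetical illustrations, not figures from the episode):

```python
# Illustrative sketch of the two pricing models discussed above.
# All numbers are hypothetical, chosen only to show the shape of each
# revenue curve; they are not figures from the episode.

FLAT_PRICE = 20.00          # $/month per premium subscriber
PRICE_PER_M_TOKENS = 15.00  # $ per million tokens (assumed rate)

def flat_revenue(users: int, premium_rate: float = 0.035) -> float:
    """Monthly revenue when only ~3.5% of users pay a flat fee."""
    return users * premium_rate * FLAT_PRICE

def metered_revenue(tokens_millions: float) -> float:
    """Monthly revenue that grows linearly with tokens consumed."""
    return tokens_millions * PRICE_PER_M_TOKENS

# Flat pricing caps out: 1M users yield ~$700k/month no matter how
# heavily each subscriber uses the product.
print(flat_revenue(1_000_000))
# Metered pricing scales with usage: 100B tokens/month at the assumed
# rate yields $1.5M, and doubles if existing customers double usage.
print(metered_revenue(100_000))
```

The design difference is that metered revenue grows with customer usage even when the customer count is flat, while subscription revenue grows only by adding payers.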

Investors are questioning OpenAI's $850 billion valuation, with secondary markets recently pricing Anthropic higher for the first time. Analysts note that OpenAI's last funding round requires an IPO at $1.2 trillion, yet there are no buyers at current levels. Additionally, Anthropic's $30 billion is primarily organic revenue, while OpenAI's figure includes $8 billion from revenue sharing with AI model providers, meaning Anthropic's real enterprise income is within 20% of OpenAI's on an adjusted basis.

Strategic Focus and Direction

OpenAI faces rising frustration over its lack of clear strategic focus. The company has been criticized for spreading efforts across consumer and enterprise markets instead of concentrating on high-value coding applications where the biggest growth and margins exist. Anonymous investors asked why OpenAI isn't fully capitalizing on its billion-user ChatGPT business, which still grows 50–100% yearly.

A leaked memo from OpenAI's Chief Revenue Officer acknowledged Anthropic's rapid rise and revealed a strategic pivot toward business customers and the agent platform layer. OpenAI's hire of Peter Steinberger from the OpenClaw open-source project is seen as an effort to capture talent and ensure future breakthroughs stay within OpenAI's ecosystem, potentially stifling external competition.

Infrastructure and Competitive Advantages

The next phase of the AI race hinges on profitable growth and controlling the infrastructure necessary for advanced compute. Travis Kalanick explains that network effects from scale in users, data, and revenue create near-insurmountable competitive advantages. Profitable growth allows for continual compute investment, which fuels machine learning advantages and locks in leadership.

Anthropic's funding model depends on revenue, while OpenAI relies on repeated massive capital raises like its recent $122 billion round. If Anthropic sustains 10x growth for another year or two, even OpenAI's capital war chest may prove insufficient to keep pace, since its revenue and margins aren't scaling accordingly. The market is reaching an inflection point: hyper-growth funded by venture capital has limits, and eventually demands genuine profit and revenue sustainability. Anthropic's advantage lies in growing operating profits and margins, allowing reinvestment without new outside capital, while OpenAI's negative contribution margins risk becoming unsustainable.

Infrastructure Constraints and Strategic Responses

The explosive demand for AI compute power is colliding with infrastructure constraints, regulatory headwinds, and public opposition that threaten both short-term progress and long-term competitiveness.

Data Center Backlash

Chamath Palihapitiya reports that about 40% of contested data center projects are being canceled—more than double last year's rate—with $162 billion in economic value at stake. David Friedberg asserts that Americans increasingly view data centers as symbols of wealth concentration and elite-focused progress, representing "the temple of the wealthy" while most people see only marginal AI benefits like medical advice from chatbots.

The backlash is particularly intense in Democratic cities, with NIMBY groups blocking projects through regulatory capture and elections. Palihapitiya cites examples of city boards being ousted to reverse data center approvals, and entire states like Maine banning new construction. David Sacks notes that about 30 states may end up banning data centers as residents fear increased power costs without local benefits.

Tech billionaire-funded advocacy groups use climate and water arguments to oppose competitor data centers, with Sacks and Palihapitiya discussing how "doomer NIMBYism" is sometimes astroturfed to slow competition's AI progress.

Infrastructure Control and Competitive Implications

Major AI labs face a pivotal decision: remain dependent on hyperscalers for compute or invest billions to build their own infrastructure. Palihapitiya calls the lack of proprietary compute a "five alarm fire," as inability to secure direct infrastructure access could throttle growth and revenue. Sacks notes that Anthropic strategically supported anti-data center sentiment to slow competitors while renting hyperscaler compute, but growth has now pushed them to the limits of third-party capacity.

Sacks and Palihapitiya suggest Anthropic delayed its Mythos model largely due to insufficient compute, instead concentrating resources on Opus 4.7. Building owned infrastructure offers priority compute access in a tightening market, but demands immense capital and faces NIMBY opposition and regulatory uncertainty. Since hyperscalers still control 60% of total compute, labs without proprietary access risk stalling at pivotal moments.

Alternative Energy Solutions

New models are emerging in response. Crusoe and CoreWeave are pioneering energy-independent data centers, bringing their own power on site. This "bring your own energy" model circumvents years-long grid interconnection waits and lessens community grid concerns. Bloom Energy's stock has surged as it supplies clean, low-emission onsite power that shortens permitting timelines from years to months.

The "Ratepayer Protection Pledge" requires hyperscalers to supply their own power or support grid expansion, insulating residential customers from cost hikes. Meanwhile, Jason Calacanis highlights Elon Musk's $18 billion "Colossus" project with 555,000 GPUs as an unprecedented private compute investment. Palihapitiya and Sacks explain that overbuilding capacity provides privileged compute access for internal models while excess capacity becomes a revenue-generating asset—Musk has already begun renting surplus compute to companies like Cursor. Meta's Prometheus cluster, reaching 150,000 GPUs by 2026, will similarly prioritize internal AI development while competitors face allocation bottlenecks.

Market Valuations and Growth Limits

Current stock market valuations are driven by the AI narrative, yet significant doubts remain about their sustainability given unresolved ROI and physical growth constraints.

Valuation Extremes

David Sacks draws comparisons to the late 1990s dot-com bubble, observing that today's market awards "crazy valuations" to AI firms regardless of real gross margins or capital demands. Chamath Palihapitiya points out that traditional value indicators like the Shiller PE ratio and Buffett index are at all-time highs, but only eight or nine mega-cap companies are driving the S&P 500 toward record levels while the broader market stagnates, creating a fragile valuation structure.

Travis Kalanick argues that much of the market's movement is dictated by geopolitical signals, particularly presidential statements about conflict resolution like those concerning Iran, rather than direct AI productivity gains. Chamath highlights Warren Buffett's near $300 billion cash position as evidence of value investors' skepticism—while retail investors pile into AI-exposed equities, Buffett appears to be waiting for a correction.

Physical Growth Constraints

David Sacks cautions that exponential scaling cannot continue indefinitely. As AI companies reach new scales, physical limitations like compute power, electricity, semiconductors, land, and data center real estate create significant barriers to further 10x growth. Sacks points to complaints that Claude began "thinking less"—returning shorter responses to conserve compute resources amid surging demand—as evidence that AI firms are hitting infrastructure limits. Although newer versions like Opus 4.7 have addressed some issues, these episodes illustrate mounting friction between AI's ambitions and infrastructure realities.

Enterprise AI: Promise vs. Reality

Large enterprises are struggling to realize profitable outcomes from AI investments, raising questions about whether AI can deliver transformative value at scale.

Implementation Challenges

David Sacks references a major McKinsey study concluding that many enterprise AI transformation projects are failing due to change management challenges and organizational inertia. Travis Kalanick elaborates that resistance comes from entrenched middle managers, bureaucrats, and complex, often undocumented processes. Change management proves particularly tricky in companies where critical procedures haven't been fully mapped.

In contrast, small businesses and startups are demonstrating clear ROI in niche domains. Jason Calacanis provides examples like Micro One and TaxGPT, which are driving significant productivity in specific applications. However, Chamath Palihapitiya cautions that startup success doesn't prove AI's transformative power at enterprise scale. Enterprise customers are pushing back, rejecting inadequate AI-generated content and demanding greater accountability, with CTOs reporting exhausted budgets requiring strict justification for future AI expenditures.

Current Limitations

Founder-led tech-native firms like Meta and Uber are experiencing accelerated feature rollouts, but Kalanick and Palihapitiya stress that technology isn't the main constraint—enterprise culture is the dominant bottleneck. AI agents today excel at automating clearly defined, repetitive tasks but struggle with novel problems or independent decision-making. Kalanick underscores that agents "aren't that smart yet" and often make basic logical errors. Both he and Calacanis agree that human oversight remains essential, as agents lack judgment and easily get lost in complex situations.

Despite years of promises, Palihapitiya notes that no AI application has yet demonstrated the productivity gains necessary to justify towering enterprise valuations. Unlike previous profitable tech waves like mobile, AI has yet to produce any scaled, consistently profitable business for enterprises. While niche startups and founder-led companies see operational improvements, there's a lack of compelling, scalable use cases that can fulfill predictions of multi-trillion dollar enterprise value.

Additional Materials

Counterarguments

  • While Anthropic's reported revenue growth is impressive, such rapid scaling is often difficult to sustain over multiple years, and projections of $80–100 billion by year-end may be overly optimistic given potential market saturation and operational challenges.
  • OpenAI's consumer focus, while limiting per-user revenue, has enabled it to build a massive user base and brand recognition, which can be leveraged for future monetization strategies beyond flat subscriptions.
  • The assertion that only 3–4% of consumers buy premium plans does not account for potential upselling, cross-selling, or future product tiers that could increase average revenue per user.
  • Secondary market valuations can be volatile and may not accurately reflect the long-term value or stability of either company.
  • OpenAI's diversified approach across consumer and enterprise markets could be seen as a risk mitigation strategy, reducing dependence on a single revenue stream or market segment.
  • The claim that Anthropic's revenue is primarily organic and OpenAI's includes $8 billion from revenue sharing does not necessarily diminish the value of OpenAI's partnerships or ecosystem, which can be strategic assets.
  • The criticism of OpenAI's lack of strategic focus may overlook the potential benefits of experimentation and adaptability in a rapidly evolving industry.
  • The narrative that Anthropic's enterprise focus is inherently superior does not consider that consumer AI adoption could drive broader societal impact and long-term market opportunities.
  • Infrastructure constraints and regulatory headwinds affect all major AI companies, not just OpenAI or Anthropic, and both firms face similar risks in scaling compute capacity.
  • The comparison of current AI valuations to the dot-com bubble may be overstated, as some AI firms have demonstrated real revenue growth and product adoption, unlike many dot-com era companies.
  • The lack of transformative enterprise AI use cases to date does not preclude their emergence in the near future, as technology adoption cycles often take years to reach maturity.
  • The challenges of enterprise AI adoption are not unique to AI and have been observed with previous technology waves, such as cloud computing and mobile, which eventually overcame similar barriers.
  • The focus on negative contribution margins for OpenAI does not account for the possibility that early-stage losses are common in high-growth tech companies and may be part of a deliberate long-term investment strategy.
  • The assertion that AI agents are not yet sufficiently intelligent and require human oversight is accurate, but ongoing rapid advancements in AI research could address these limitations sooner than expected.
  • The emphasis on NIMBY opposition and regulatory capture may understate the legitimate environmental and community concerns associated with large-scale data center construction.

Actionables

  • You can track and compare the pricing models of different AI tools you use, then switch to those with metered or usage-based pricing to better align your spending with actual value and avoid overpaying for flat-rate subscriptions that may not scale with your needs; for example, if you only use an AI writing tool occasionally, opt for one that charges per word or per use rather than a monthly fee.
  • A practical way to anticipate and adapt to infrastructure constraints is to monitor local news for data center developments in your area and adjust your digital habits accordingly, such as scheduling large file uploads or AI-powered tasks during off-peak hours to avoid slowdowns or outages that may result from community opposition or grid strain.
  • You can evaluate the effectiveness of AI-powered features in your daily tools by keeping a simple log of when these features save you time or cause errors, then use this record to decide whether to continue using, replace, or disable certain AI functions, ensuring your workflow benefits from automation without sacrificing quality or oversight.


OpenAI vs. Anthropic: Growth, Valuations, Revenue, and AI Competitive Positioning

OpenAI and Anthropic are locked in a fierce rivalry that is reshaping the landscape of artificial intelligence, with their divergent growth trajectories, business models, and funding strategies setting the stage for a consequential industry “flippening.” A deep dive into their revenue growth, market focus, valuations, and infrastructural advantages provides insight into who may emerge as the market leader.

Anthropic's 10x Revenue Growth Outpaces OpenAI's 3-4x, Hinting at Market Leadership Despite Lower Revenue

While both OpenAI and Anthropic reported roughly $30 billion in revenue at the beginning of Q2, the underlying growth rates tell a different story. Anthropic has posted an extraordinary 10x annual revenue growth, skyrocketing from $1 billion to $10 billion last year and reaching $30 billion by Q1, with projections of $80–100 billion by year’s end if current trends continue. In contrast, OpenAI’s annualized growth hovers at 3-4x, requiring two years to decuple revenue, compared to Anthropic’s single year. This exponential growth rate, unprecedented even outside of tech, puts Anthropic in a position to rapidly overtake its rival.
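The one-year-versus-two-years comparison is just compounding of annual growth multiples; a quick illustrative check (3.2x is an assumed midpoint of the 3-4x range, not a number quoted in the episode):

```python
# Compounding annual revenue multiples: a 10x multiple reaches tenfold
# revenue in one year, while a 3-4x multiple takes roughly two years,
# since 3.2 squared is about 10. Values are normalized to a starting
# revenue of 1.0 and are purely illustrative.

start = 1.0
after_one_year_at_10x = start * 10.0
after_two_years_at_3_2x = start * 3.2 ** 2  # ~10.24

print(after_one_year_at_10x)
print(round(after_two_years_at_3_2x, 2))
```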

Anthropic's Enterprise Coding and Metered Token Pricing Offer Scalable Revenue, Unlike OpenAI's Consumer Subscription Model With Lower Per-User Revenue

Anthropic’s explosive growth is tied directly to a clear strategic focus on enterprise use cases, particularly coding. Its metered token pricing, akin to an electricity model, scales as businesses increase their usage, resulting in robust, scalable revenue streams. In comparison, OpenAI prioritized the consumer segment, where willingness to pay is lower—only about 3–4% of consumers are inclined to buy premium, typically at a flat $20/month “all you can eat” model, which caps per-user revenue and hinders overall scalability.

OpenAI's $850B Valuation Overvalued vs. Anthropic's; Secondary Market Indicates Higher Investor Confidence in Anthropic

Investors are calling OpenAI’s $850 billion valuation into question, noting that secondary markets have recently priced Anthropic higher for the first time. Analysts argue that, for OpenAI's last funding round to make sense, the company now needs to IPO at $1.2 trillion—yet there are no buyers at the current $850 billion level. This reversal of investor confidence signals a shift towards Anthropic as the likely leader.

Revenue Composition: Anthropic's $30B Is Organic, OpenAI's $30B Includes AI Model Revenue Sharing

Anthropic’s $30 billion run rate is primarily organic, driven by direct product use. OpenAI’s reported $30 billion, however, is “cap inflated” by $8 billion due to revenue sharing and accounting with AI model providers and channel partners. On an apples-to-apples basis, Anthropic’s adjusted revenue is within 20% of OpenAI’s but represents real, usage-based enterprise income, not just platform licensing or shared model revenue.

OpenAI Criticized for Lacking Focus, Pursuing Consumer and Enterprise Markets Over High-Value Coding

OpenAI faces rising internal and investor frustration over its lack of clear strategic focus. The company has been criticized for spreading efforts thinly across both consumer and enterprise markets instead of zeroing in on high-value coding applications, where the biggest growth and margins reside.

Investors Urge OpenAI to Prioritize ChatGPT Growth Over Enterprise and Code Applications

Anonymous OpenAI investors voiced frustration, asking why the company isn’t fully capitalizing on its billion-user ChatGPT business, which still grows 50–100% yearly, instead of diluting focus with enterprise and coding initiatives. There is a call for OpenAI to devote resources to cementing ChatGPT’s leadership, rather than splitting attention.

Internal Memo From OpenAI's Chief Revenue Officer Critiques Anthropic, Outlines Pivot Toward Business Customers, and Acknowledges Past Misdirection

A leaked memo from OpenAI’s Chief Revenue Officer acknowledged Anthropic’s rapid rise and critiqued their business model but also revealed OpenAI’s strategic pivot. The memo admits OpenAI previously lacked focus, but now plans to pursue business customers and win the agent platform layer, marking a shift to deeper enterprise engagement.

OpenAI's Bid for OpenClaw's Peter Steinberger Suggests an Effort to Stifle Competition and Direct Future Innovations Into OpenAI Products

OpenAI’s hire of Peter Steinberger, the architect of the OpenClaw open-source project, is seen as a bid to capture talent and ensure future breakthroughs are integrated within OpenAI’s product ecosystem, potentially stifling external competition and open-source advancements.

Frontier AI Labs' Race Hinges on Profitable Growth and Maintaining Compute Advantage via Infrastructure Control

The next phase of the AI race is defined by two things: who can achieve profitable, sustainable ...


Additional Materials

Clarifications

  • "10x annual revenue growth" means a company's revenue increases tenfold within one year. "Decuple revenue" means increasing revenue by ten times, but the time frame can vary. In this context, Anthropic grows its revenue ten times in one year, while OpenAI takes about two years to do the same. Faster growth implies quicker market expansion and stronger competitive positioning.
  • Metered token pricing charges customers based on actual usage, like paying for electricity consumed, allowing costs to scale with demand. Each "token" represents a unit of AI processing, so more tokens used means higher fees. Subscription models charge a fixed fee regardless of usage, limiting revenue growth per user. Metered pricing aligns cost with value delivered, benefiting enterprise clients with variable needs.
  • AI model revenue sharing means that a company splits the income generated from AI services with other parties who provide or support the AI models. Channel partnerships involve collaborating with external companies that help distribute or sell the AI products, often sharing revenue in return. These arrangements can inflate reported revenue because the company counts shared income rather than only direct sales. This contrasts with purely organic revenue, which comes directly from end-user consumption without such splits.
  • A company's valuation is an estimate of its market worth based on factors like revenue, growth, and investor demand. Being "overvalued" means the valuation is higher than what the company's financial performance and prospects justify. An IPO valuation is the price at which a company offers its shares to the public, reflecting investor expectations. If a company needs a higher IPO valuation than current market interest supports, it may struggle to attract buyers or raise capital.
  • Secondary markets allow investors to buy and sell shares of private companies before an IPO. Prices in these markets reflect real-time investor sentiment and perceived company value. A higher price for Anthropic in secondary markets signals stronger investor confidence compared to OpenAI. This can influence future funding rounds and public market valuations.
  • Organic revenue refers to income generated directly from a company’s core business activities without external adjustments. "Cap inflated" revenue includes additional amounts from accounting practices, such as revenue sharing or partner contributions, that may exaggerate the reported total. This can make a company’s revenue appear higher than the actual cash earned from direct sales or services. Understanding this distinction helps assess the true financial health and growth of a business.
  • Focusing on enterprise use cases targets businesses that often require large-scale, specialized AI solutions, leading to higher and more predictable revenue per customer. Consumer segments typically involve many users paying smaller amounts individually, resulting in lower per-user revenue and less predictable income. Enterprises also tend to have longer-term contracts and greater willingness to pay for scalable, customized services. This makes enterprise focus more conducive to sustainable, scalable growth and profitability.
  • Negative contribution margins occur when the cost to produce and deliver a product exceeds the revenue generated from it. This means each sale actually increases losses rather than profits. Sustaining this is unsustainable because it drains cash reserves and requires continuous external funding. Eventually, a company must achieve positive margins to maintain long-term financial health.
  • Network effects occur when a product or service becomes more valuable as more people use it. This creates a positive feedback loop, attracting even more users and increasing the platform's dominance. Competitors struggle to catch up because the established network offers greater utility and data advantages. In AI, larger user bases improve model training and data quality, reinforcing market leadership.
  • Controlling infrastructure for advanced compute means owning or managing the powerful hardware and data centers needed to run large AI models efficiently. This control reduces reliance on third-party providers, lowering costs and improving performance. It also enables faster innovation by customizing systems specifically for AI workloads. Ultimately, it creates a competitive barrier, as rivals must invest heavily to match this ...

Counterarguments

  • Revenue growth rates, while impressive, may not be sustainable over multiple years for either company due to market saturation, competition, or macroeconomic factors.
  • OpenAI’s consumer focus, particularly with ChatGPT, has resulted in massive user adoption and brand recognition, which can translate into long-term value and network effects not immediately reflected in revenue growth rates.
  • The flat subscription model can provide predictable, recurring revenue and lower customer acquisition costs, which are valuable for long-term planning and stability.
  • Valuations in secondary markets can be volatile and influenced by short-term sentiment rather than long-term fundamentals or actual financial performance.
  • OpenAI’s diversified approach across consumer and enterprise markets may reduce risk by not relying solely on one segment, potentially providing resilience if one market underperforms.
  • The inclusion of revenue from partnerships and model licensing is a common practice in the tech industry and can represent legitimate business growth, not just “cap inflation.”
  • Investor confidence can shift rapidly and may not always accurately predict future market leadership or technological superiority.
  • OpenAI’s hiring of top talent from o ...


Infrastructure Constraints: Power Issues, NIMBY, and Strategic Self-Built Infrastructure

The explosive demand for artificial intelligence (AI) compute power is colliding with growing infrastructure and regulatory constraints, heightened public opposition, and strategic moves by leading labs and hyperscalers. America is witnessing a mounting crisis around AI data centers, with regulatory headwinds and social backlash threatening both short-term progress and long-term competitiveness.

Data Center Projects Face 40% Cancellation Amid Political and Regulatory Challenges as Public Opposition Threatens AI Infrastructure

Chamath Palihapitiya reports that out of every hundred data center projects contested, about 40 are canceled—a cancellation rate that is more than double last year's. The economic value at stake in these projects approaches $162 billion. This escalation is being fueled by shifting public sentiment: Americans are growing more negative toward AI, due to fears including job loss, wealth concentration, and the belief that technological progress benefits only the elite. David Friedberg asserts that data centers now represent a visible symbol of this perceived wealth and elite-focused progress, standing as “the temple of the wealthy” and a mechanism for tech billionaires to get ahead while the general population feels left behind. There is little broader consumer benefit felt from AI, with most people only seeing marginal improvements, such as medical advice from chatbots.

Public Sentiment Increasingly Views Data Centers As Symbols of Wealth Concentration and Elite-Focused Technological Progress

Many communities now view data centers as representing the tech elite’s interests, not public good. The backlash is particularly intense in Democratic cities, with local NIMBY (Not In My Backyard) groups blocking projects through regulatory capture and elections. Palihapitiya cites examples like a city board approving a $6 billion data center, only for half of its members to be ousted in order to reverse the decision. Entire states such as Maine have even passed laws banning new data center construction. Relocating these projects to states like Texas is not a simple fix, due to the lack of grid capacity and supporting infrastructure.

Democratic Cities and NIMBY Groups Block Data Centers Through Denial and Regulatory Capture

Local and state opposition has become a formidable barrier, often fueled by coordinated campaigns. David Sacks notes that about 30 states may end up banning data centers outright as residents fear that energy-hungry projects will increase power costs without offering local benefits.

Tech Billionaire-Funded Advocacy Groups Use Climate and Water Arguments to Oppose Competitor Data Center Construction and Slow AI Development

Regulatory opposition is amplified by billionaire-funded advocacy groups that use environmental and resource arguments to galvanize support against new sites. Climate and water impacts are cited frequently, sometimes with limited factual basis. Sacks and Palihapitiya discuss how "doomer NIMBYism" is sometimes astroturfed, with well-funded tech advocacy groups using these arguments to slow the competition's AI progress while serving their own strategic interests.

Frontier Labs Face a Choice: Accept Hyperscaler Dependency or Invest Billions in Infrastructure, With Competitive Implications

Major AI labs such as OpenAI and Anthropic confront a pivotal decision: remain dependent on hyperscalers (Amazon, GCP, Azure) for compute, or invest billions to build and control their own infrastructure. Travis Kalanick refers to the reliance on hyperscalers as a troubling dependency. For labs now operating at scale, Palihapitiya calls the lack of proprietary compute a “five alarm fire,” as inability to secure direct access to infrastructure could throttle growth and revenue, not just product quality. These labs must secure land, power, and construction, which is “turning out to be impossible” due to regulatory and NIMBY obstacles.

Anthropic and OpenAI Must Shift From Renting Compute to Controlling Their Own Data Centers to Avoid Revenue Bottlenecks

David Sacks notes that for years, Anthropic strategically supported anti-data center sentiment to slow competitors, relying instead on renting hyperscaler compute. Now, growth has pushed them to the limits of third-party capacity, forcing Anthropic to seek proprietary infrastructure. This shift means past opposition strategies may backfire as labs compete for scarce approvals and power.

Anthropic's Decision to Hold Back Mythos Likely Reflected Compute Constraints, Reserving Resources For Opus 4.7 Launch and Marketing Impact Through Restraint

Anthropic’s release strategy offers a prime example: Sacks and Palihapitiya suggest the lab delayed commercial launch of its Mythos model largely due to insufficient compute, instead concentrating resources on Opus 4.7. This generated scarcity-driven marketing buzz and let government buyers believe the holdback resulted from responsible restraint, but actual resource constraints played a substantial, perhaps decisive, role.

Building Proprietary Infrastructure Offers a Competitive Edge With Priority Compute Access but Demands Substantial Capital and Faces NIMBY Opposition and Regulatory Uncertainty

Building owned data center infrastructure confers a clear competitive advantage: priority access to compute in an ever-tightening market. But the capital outlay is immense, permitting is slow, NIMBY resistance is ongoing, and the regulatory landscape keeps shifting. Palihapitiya warns that since hyperscalers still control 60% of total compute today, labs without proprietary access risk stalling at pivotal moments.

Alternative Energy Partnerships Address Data Center Constraints and Community Grid Concerns

In response, new models are emerging. Crusoe and CoreWeave are pioneering energy-independent data centers, bringing their own power—be it natural gas, diesel, or solar—on site. This “bring your own energy” (BYOE) model circumvents years-long grid interconnection waits and lessens community concerns about grid stress.

Crusoe and CoreWeave Pioneer Energy-Independent Data Centers

These companies show the feasibility of rapid infrastructure expansion outside of legacy grid dependencies. By installing their own onsite power sources, they accelerate start times and provide buffer capacity for the surrounding region.

Bloom Energy & Peers See Rapid Valuation Growth With Onsite Energy Solutions, Cutting Permitting Timelines From Years to M ...


Infrastructure Constraints: Power Issues, NIMBY, and Strategic Self-Built Infrastructure

Additional Materials

Clarifications

  • NIMBY stands for "Not In My Backyard," describing local opposition to developments perceived as undesirable nearby. It often arises from concerns about environmental impact, property values, or community character. This opposition can influence local elections and regulatory decisions, effectively blocking or delaying projects. NIMBYism reflects a tension between broader societal benefits and local interests.
  • Hyperscalers are large cloud service providers like Amazon Web Services, Google Cloud Platform, and Microsoft Azure that operate massive data centers with vast computing resources. They offer on-demand AI compute power to companies without requiring them to build their own infrastructure. Their control over most of the available compute capacity gives them significant influence over AI development timelines and costs. Dependency on hyperscalers can limit AI labs' growth and strategic flexibility.
  • "Compute" refers to the processing power required to train and run AI models, typically provided by specialized hardware like GPUs. It is critical because more compute enables larger, more complex models that perform better and learn faster. Limited compute restricts AI development speed and innovation, making access to it a competitive advantage. Controlling compute resources ensures priority and scalability for AI projects.
  • Regulatory capture occurs when regulatory agencies are dominated by the industries they are supposed to regulate, leading to decisions that favor industry interests over the public good. This can happen through lobbying, revolving door employment, or political influence. In data center approvals, capture means local boards may prioritize tech companies' desires, ignoring community concerns. As a result, projects can be approved or blocked based on political power rather than objective criteria.
  • Astroturfing is the practice of creating fake grassroots movements to give the appearance of widespread public support or opposition. Advocacy groups use it to manipulate public opinion and influence policy by masking their true sponsors. This tactic can distort genuine community concerns by amplifying certain voices while silencing others. In the AI data center context, it means some opposition may be orchestrated by well-funded interests rather than local residents.
  • Renting compute from hyperscalers means using cloud services owned by companies like Amazon or Google, paying for access to their shared data centers and hardware. Owning proprietary data center infrastructure involves building and operating your own physical facilities and servers, giving full control over hardware, power, and capacity. This ownership allows guaranteed resource availability and customization but requires massive upfront investment and ongoing maintenance. Renting offers flexibility and lower initial costs but risks limited access and dependency on third-party priorities.
  • Owning data centers gives AI labs direct control over hardware, reducing reliance on third-party providers who may limit access or raise costs. It enables faster innovation by prioritizing compute resources for their own projects without competing for capacity. Proprietary infrastructure also enhances data security and customization tailored to specific AI workloads. This control is crucial as AI models grow larger and require more consistent, scalable compute power.
  • The "Ratepayer Protection Pledge" is a policy commitment to prevent increased electricity costs for residential customers caused by large industrial users like data centers. It requires these users to either generate their own power or fund grid upgrades, so utilities don't pass extra costs onto households. This helps maintain affordable energy bills for the general public while supporting infrastructure growth. The pledge balances economic development with consumer protection in energy policy.
  • Grid interconnection involves connecting a data center to the local electrical grid, which requires extensive infrastructure upgrades and regulatory approvals. These processes can take years due to capacity limits, environmental reviews, and coordination with utilities. Onsite power generation bypasses these delays by producing electricity directly at the data center, ensuring reliable and immediate energy supply. This approach also reduces strain on local grids and mitigates risks of power cost increases for nearby residents.
  • Elon Musk’s "Colossus" project represents one of the largest private AI compute investments ever made, signaling a major leap in processing power. Having 555,000 GPUs means the facility can perform massive parallel computations, essential for training complex AI models faster and at greater scale. This scale enables handling vast datasets and more sophisticated AI tasks that smaller setups cannot manage efficiently. Such capacity also creates a competitive advantage by ensuring exclusive access to powerful compute resources.
  • An "overbuild strategy" means investing in more computing capacity than immediately needed to ensure constant availability. This excess capacity ...

Counterarguments

  • While public opposition and NIMBYism are cited as major barriers, some communities have welcomed data centers for the jobs, tax revenue, and economic development they bring, especially in rural or economically struggling areas.
  • The claim that most consumers see only marginal benefits from AI overlooks significant advances in areas like accessibility, healthcare diagnostics, language translation, and fraud detection, which have broad societal impact.
  • The assertion that billionaire-funded advocacy groups use environmental arguments with "limited factual basis" may understate legitimate concerns about water usage, carbon emissions, and local environmental impacts, which are documented in several regions.
  • Not all opposition to data centers is astroturfed or driven by competitive interests; many grassroots groups have genuine concerns about local environmental and quality-of-life impacts.
  • The focus on proprietary infrastructure as the only path to competitive advantage may ignore the potential for collaborative or shared infrastructure models, which could reduce costs and environmental impact.
  • The narrative that regulatory and NIMBY obstacles are the primary bottlenecks may underplay technical, supply chain, and skilled labor shortages that also constrain data center development.
  • The Ratepayer Protection Pledge and similar policies can be seen as positive steps to ensure that large-scale industrial users do not unfairly burden residential ratepayers, addressing a legitima ...


Market Valuations & AI: Are Highs Justified Despite Unclear ROI?

The current stock market exhibits extreme valuations, largely driven by the explosive narrative around artificial intelligence (AI). Yet it remains unclear whether these highs are sustainable, given unresolved questions about AI’s return on investment (ROI) and hard physical limits on growth.

Equity Valuations Hit Extremes; Debate Over AI Productivity Justifying High Prices Persists

David Sacks draws comparisons to the late-1990s dot-com bubble, when companies could inflate their valuations simply by associating themselves with new technology, often without solid business fundamentals. He observes that, much like that era, today’s market awards “crazy valuations” to AI and technology firms regardless of real gross margins or capital demands. The incremental cost of pure software is near zero, but physical-world companies are being priced as if they shared the same economics, ignoring crucial structural differences.

Chamath Palihapitiya points out that traditional indicators of value, such as the Shiller Price/Earnings (PE) ratio and the Buffett index (the total value of all US equities relative to GDP), are both peaking at or near all-time highs. However, this does not reflect broad market strength: only a handful—about eight or nine—mega-cap companies are driving the S&P 500 toward all-time highs, creating a scenario of acute “dispersion” where the wider market stagnates. This highly skewed structure makes the current rally fragile.
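Both yardsticks are simple ratios. As a rough illustration of how they are computed (all figures below are hypothetical placeholders, not actual market data), a minimal sketch:

```python
# Illustrative only: hypothetical numbers, not current market readings.

def shiller_cape(index_price, real_earnings_10yr):
    """Cyclically adjusted P/E: index price divided by the 10-year
    average of inflation-adjusted earnings."""
    return index_price / (sum(real_earnings_10yr) / len(real_earnings_10yr))

def buffett_indicator(total_market_cap, gdp):
    """Total US equity market cap relative to GDP (often quoted as a %)."""
    return total_market_cap / gdp

# Hypothetical inputs: 10 years of real index earnings, a price level,
# and made-up market-cap / GDP figures.
earnings = [110, 115, 120, 118, 125, 130, 128, 135, 140, 145]
print(round(shiller_cape(4800, earnings), 1))        # prints 37.9
print(round(buffett_indicator(55e12, 28e12) * 100))  # prints 196 (i.e. 196%)
```

Readings well above historical averages on either ratio are what commentators mean when they say equities look expensive relative to earnings or to the broader economy.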

S&P 500 Nears All-Time Highs; 8-9 Mega-Cap Companies Drive Gains, Broader Market Stagnates, Creating Fragile Valuation Structure

Despite the S&P 500 nearing all-time highs and showing strong performance on the surface, Palihapitiya underscores that the index’s gains are mostly the product of a select few giants. Most other companies are not seeing comparable growth, which complicates the market picture and raises the risk that the current valuation structure is fragile and unsustainable.
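The mechanics behind this dispersion are easy to see with toy numbers (hypothetical, not actual S&P 500 constituents): in a cap-weighted index, a couple of dominant constituents can carry the whole index while the typical stock goes nowhere.

```python
# Hypothetical illustration of index concentration: a cap-weighted index
# can rise sharply even when most constituents are flat.

def cap_weighted_return(market_caps, returns):
    """Index return = sum of constituent returns weighted by market cap."""
    total = sum(market_caps)
    return sum(cap / total * r for cap, r in zip(market_caps, returns))

# Two mega-caps (cap 1000 each) up 30%; eight smaller firms (cap 100 each) flat.
caps    = [1000, 1000] + [100] * 8
returns = [0.30, 0.30] + [0.00] * 8

print(f"{cap_weighted_return(caps, returns):.1%}")  # prints 21.4%
print(f"{sum(returns) / len(returns):.1%}")         # equal-weighted: 6.0%
```

The gap between the cap-weighted and equal-weighted figures is the "dispersion" Palihapitiya describes: the headline index can near record highs while the average constituent stagnates.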

Travis Kalanick argues that much of the market’s movement is dictated by policy responses and geopolitical signals, particularly during the Trump era. He describes Trump as using the stock market as a “weathervane,” maneuvering policy to buoy the S&P 500, particularly in the aftermath of events like war-related selloffs. Investors are now hyper-focused on presidential statements about conflict resolution, such as those concerning Iran, rather than on the direct productivity gains from AI. As Trump moves between stoking market anxiety and then reassuring investors through de-escalation and practical measures, seasoned traders have adapted to these cycles of volatility and recovery.

Warren Buffett's $300 Billion Cash Position Indicates Value Investors' Caution About Market Sustainability Despite Retail Enthusiasm for AI Exposure

Chamath highlights Warren Buffett’s near $300 billion cash position as evidence of value investors’ skepticism. While retail investors pile into AI-exposed equities, Buffett’s strategy signals caution; he appears to be waiting for a correction, guided by indicators showing that equities are overvalued relative to the broader economy. The management shift at Berkshire Hathaway following Buffett’s reduced day-to-day influence is noted, but the underlying posture remains a significant indicator of traditional value investing’s read on the market’s sustainability.

Frontier AI Models Encounter Growth Limits Due to Compute Scalability and Infrastructure Constraints

David Sacks cautions that, despite the breathtaking revenue growth from leading AI firms like Anthropic, such exponential scaling cannot continue indefinitely. As AI companies reach new scales, physical limitations such as compute power, electricity, semiconductors, land, and data center real estate create significant 10x growth barriers. Sacks notes that Anthropic is already running into these constraints.

10x Growth Limits: Physical Constraints Like Electricity, Data Centers, Semiconductors, and Land

AI companies dependent on ever-expanding compute face stark realities: increasing model capability requires vast new resources, but securing enough electricity, semiconductor supply, and land for data centers is becoming an ever-greater challenge. This imposes hard ceilings on both speed and scale of development.

Claude's "Thinking Less" Signals AI Firms Hitting Infrastructure Limits Before Opus 4.7

There is growing evidence of these limits in user experience. Sacks points to complaints that Anthropic’s Claude model began “thinking le ...



Additional Materials

Clarifications

  • The Shiller Price/Earnings (PE) ratio, also called the cyclically adjusted PE (CAPE) ratio, measures stock market valuation by dividing the current price of a stock index by the average inflation-adjusted earnings of the companies over the past 10 years. This smooths out short-term fluctuations in earnings to provide a more stable, long-term view of market valuation. It helps investors assess whether the market is overvalued or undervalued relative to historical norms. High Shiller PE ratios often indicate that stocks may be expensive compared to their historical earnings.
  • The Buffett index compares the total market value of all publicly traded U.S. stocks to the country's gross domestic product (GDP). It serves as a broad measure of market valuation relative to the size of the economy. A high Buffett index suggests stocks may be overvalued compared to economic output. Warren Buffett uses it to gauge whether the stock market is expensive or cheap overall.
  • Gross margin is the difference between a company's revenue and the cost of goods sold, showing how much profit is made before other expenses. Capital demands refer to the amount of money a company needs to invest in physical assets like equipment, buildings, or technology to operate and grow. High gross margins often indicate efficient production or strong pricing power, while high capital demands can limit profitability and scalability. Investors use these metrics to assess a company's financial health and growth potential.
  • Software economics benefit from near-zero incremental costs because once developed, software can be copied and distributed digitally at almost no expense. Physical-world companies face ongoing costs for materials, manufacturing, logistics, and infrastructure, which scale with production volume. This fundamental difference means software firms can achieve massive margins and rapid scaling, while physical companies incur proportional expenses that limit growth speed and profitability. Ignoring these distinctions leads to overvaluing physical companies as if they had software-like cost structures.
  • Mega-cap companies are firms with extremely large market capitalizations, often exceeding hundreds of billions of dollars. Their stock prices heavily influence market indices like the S&P 500 because these indices are weighted by market capitalization. This means that price movements in a few mega-cap stocks can disproportionately affect the overall index performance. As a result, the index may rise even if most smaller companies are stagnant or declining.
  • Dispersion in stock market performance refers to the variation in returns among different stocks or sectors. High dispersion means a few stocks perform very well while many others lag or decline. This creates an uneven market where overall indices may rise despite broad weakness. It signals increased risk and challenges for diversified investors.
  • Presidential statements can signal government intentions and policy directions, which affect investor confidence and market expectations. Geopolitical events create uncertainty or stability, influencing risk appetite and capital flows. Markets react quickly to perceived changes in conflict status, adjusting prices based on anticipated economic impacts. Traders often exploit predictable patterns of volatility tied to political developments.
  • During his presidency, Trump frequently used public statements and policy announcements to influence market sentiment, often creating sharp swings in investor confidence. This approach led to predictable cycles of fear and reassurance, which traders learned to anticipate and exploit for profit. His tactics included signaling potential conflicts or resolutions to sway market direction temporarily. This engineered volatility became a new normal, shaping trading strategies around political cues rather than purely economic fundamentals.
  • Warren Buffett’s large cash reserve signals his cautious stance amid high market valuations, indicating he sees limited attractive investment opportunities at current prices. Holding cash allows him to quickly buy undervalued assets during market downturns. This contrasts with retail investors who are aggressively buying AI-related stocks, suggesting Buffett doubts the sustainability of current market enthusiasm. His approach reflects a value investing philosophy focused on long-term fundamentals rather than short-term hype.
  • AI growth depends heavily on vast computational power, which requires specialized hardware like GPUs and advanced semiconductors. These chips consume significant electricity, making energy supply a critical bottleneck. Data centers housing this hardware need large physical spaces with cooling systems to prevent overheating. Limited availability of these resources restricts how quickly and extensively AI models can scale.
  • "Thinking less" refers to AI models generating shorter, simpler responses to reduce computational load. This happens when companies limit the model's processing to conserve scarce resources like electricit ...

Counterarguments

  • While AI ROI remains uncertain in some sectors, there are documented cases where AI has already delivered measurable productivity gains and cost savings, particularly in logistics, healthcare, and finance.
  • The comparison to the dot-com bubble may be overstated, as many leading AI and tech firms today have strong revenues, profits, and established business models, unlike many speculative companies of the 1990s.
  • The concentration of S&P 500 gains in mega-cap companies is not unprecedented and has occurred in previous market cycles; such concentration can persist for extended periods without immediate correction.
  • Traditional valuation metrics like the Shiller PE ratio may not fully account for structural changes in the economy, such as the rise of intangible assets and globalized business models.
  • Physical infrastructure constraints on AI growth are significant, but ongoing advances in semiconductor technology, renewable energy, and data center efficiency may alleviate some of these bottlenecks over time.
  • Warren Buffett’s la ...


Enterprise AI: Turning Tools Into Revenue or Remaining Theoretical?

AI Productivity Claims and Enterprise Value Diverge as Firms Struggle to Translate AI Investments Into Bottom-Line Improvements

Large enterprises are struggling to realize profitable outcomes from their AI investments, as highlighted by David Sacks and Travis Kalanick. Sacks references a major McKinsey study, concluding that many enterprise AI transformation projects are failing, primarily because of change management challenges and organizational inertia. Kalanick elaborates that, especially in big companies, resistance comes from entrenched layers of middle managers, technocrats, bureaucrats, and complex, often undocumented, processes. He emphasizes that change management is inherently a human, not strictly technological, struggle and proves particularly tricky in companies where critical procedures haven’t been fully mapped.

In contrast, small businesses and startups are demonstrating clear returns on investment from AI in niche domains. Jason Calacanis provides examples from his portfolio, such as Micro One, which leverages AI to enable dark pool data collection for large language model companies, and TaxGPT, an AI tool now used by six to seven percent of accountants, driving significant productivity in tax services. In these focused applications, AI is able to deliver market growth and operational efficiency.

However, Chamath Palihapitiya cautions that the success of startups and small players at the edges does not prove AI’s transformative power at enterprise scale. He contends that unless AI demonstrates effectiveness in high-leverage, complex enterprise use cases, its perceived value will remain largely theoretical. Meanwhile, enterprise customers are pushing back: they reject inadequate AI-generated content and demand greater accountability for spending, while CTOs report exhausted budgets and require strict justification for future AI expenditures.

Tech Company and Founder-Led Productivity Gains Show Success but Remain Exceptions, Not Indicators of Broad AI Transformation

Founder-led and tech-native firms like Meta and Uber serve as notable exceptions to the sluggish enterprise AI adoption curve. Calacanis observes that these companies are experiencing accelerated feature rollouts post-AI integration, attributing this speed to a culture that embraces, rather than resists, AI-native change.

Yet, Kalanick and Palihapitiya stress that technology is not the main constraint—enterprise culture is the dominant bottleneck. Resistance from middle managers and process-bound bureaucrats slows or even stalls AI deployments. Further compounding this, Kalanick points out, are gaps in legacy system documentation—many historical processes are only informally known, making AI deployment particularly difficult when explicit instructions and logic are absent.

AI Agents Excel in Automating Repetitive Tasks but Struggle With Novel Problem-Solving and Independent Decision-Making Without Human Guidance

AI agents today are effective at automating clearly defined, repetitive tasks, but they fall short when confronted with novel problems or when required to make decisions independently. Kalanick underscores the lack of sophistication in current agents, noting “they’re not that smart yet” and often make basic logical errors, such as taking conflicting positions on the same asset without explicit instruction. Both he and Calacanis agree that human oversight remains essential; AI agents require humans in the loop to provide direction, validation, an ...



Additional Materials

Clarifications

  • Change management in AI adoption refers to the structured approach organizations use to transition individuals, teams, and processes to new AI-driven ways of working. It involves addressing employee resistance, retraining staff, and redesigning workflows to integrate AI tools effectively. Successful change management ensures that technological changes are embraced and sustained, minimizing disruption and maximizing AI’s benefits. Without it, AI projects often fail due to cultural pushback and lack of alignment between technology and human factors.
  • Middle managers, technocrats, and bureaucrats act as intermediaries who enforce existing policies and processes within large enterprises. They often prioritize stability and risk avoidance, which can slow or resist changes like AI adoption. Their deep knowledge of informal, undocumented workflows makes it hard to alter or automate processes without their cooperation. This cultural and procedural inertia creates significant barriers to implementing new technologies effectively.
  • Dark pools are private financial exchanges where large investors trade securities anonymously to avoid impacting market prices. Collecting data from dark pools helps AI models analyze hidden trading patterns and liquidity not visible in public markets. This data is valuable for training large language models to understand complex financial behaviors and improve predictive accuracy. Access to dark pool data gives AI tools a competitive edge in financial analysis and decision-making.
  • Micro One uses AI to gather and analyze dark pool trading data, which are private financial exchanges not visible to the public, helping language model companies access unique market insights. TaxGPT applies AI to automate and enhance tax preparation tasks, improving accuracy and efficiency for accountants. Both tools focus on niche, specialized functions where AI can deliver measurable productivity gains. Their success highlights AI’s practical value in targeted applications rather than broad enterprise transformation.
  • "High-leverage, complex enterprise use cases" refer to AI applications that can significantly impact a company's core operations or revenue streams. These use cases involve intricate processes, multiple stakeholders, and large-scale data integration. Success in these areas can lead to substantial efficiency gains or competitive advantages. They are challenging because they require deep understanding of business context and seamless coordination across departments.
  • Enterprise culture often resists change due to established hierarchies and risk-averse mindsets. Middle managers may fear job displacement or loss of control, leading to pushback against AI initiatives. Complex, undocumented processes make it hard to integrate AI without disrupting workflows. Successful AI adoption requires aligning incentives, training, and clear communication to overcome these human and organizational barriers.
  • Legacy systems often rely on processes that were developed over time without formal documentation, meaning the exact steps and rules are not recorded. This "informal knowledge" typically resides in employees' heads, making it hard to transfer or automate. AI systems require clear, explicit instructions to function effectively, so undocumented processes create barriers to AI integration. Without detailed process mapping, AI cannot reliably replicate or improve these workflows.
  • Automating repetitive tasks means AI performs routine, predictable actions based on clear rules, like data entry or sorting emails. Novel problem-solving requires understanding new, complex situations without predefined instructions, demanding creativity and judgment. Current AI lacks true understanding and adaptability, so it struggles with tasks needing insight or innovation. Human guidance is essential to handle exceptions and make nuanced decisions beyond AI’s programmed capabilities.
  • Artificial General Intelligence (AGI) refers to a type of AI that can understand, learn, and apply knowledge across a wide range of tasks at a human-like level. Unlike narrow AI, which is designed for specific tasks, AGI would possess flexible reasoning and problem-solving abilities. Achieving AGI is considered a major milestone because it could perform any intellectual task a human can. However, it remains theoretical and has not yet been realized in pra ...

Counterarguments

  • Some large enterprises, such as those in the financial services and healthcare sectors, have reported measurable productivity gains and cost savings from AI-driven automation, fraud detection, and predictive analytics, indicating that profitable outcomes are possible in certain contexts.
  • The assertion that enterprise AI transformation is mostly theoretical may overlook incremental but significant improvements in areas like supply chain optimization, customer service automation, and risk management, which may not be headline-grabbing but contribute to bottom-line improvements.
  • While resistance from middle management and bureaucratic inertia are real challenges, some enterprises have successfully addressed these issues through targeted change management programs, cross-functional teams, and executive sponsorship.
  • The lack of fully mapped processes is not unique to AI adoption; previous technology waves (e.g., ERP, cloud migration) faced similar challenges and were eventually overcome, suggesting that AI adoption may follow a similar trajectory.
  • The claim that no AI application has demonstrated sufficient productivity gains to justify high valuations may not account for long-term investments and the time required for large-scale transformation in complex organizations.
  • Some AI tools, such as those used for cybersecurity, fraud detection, and predictive maintenance, have already demonstrated significant ROI and operational improvements at enterprise scale.
  • The comparison to the mobile revolution may not be entirely apt, as AI is a general-purpose technology with ...
