In this episode of All-In with Chamath, Jason, Sacks & Friedberg, the hosts examine Anthropic's decision to withhold its new AI model due to security concerns, while questioning whether this represents genuine risk management or strategic positioning. The conversation covers Anthropic's record-breaking revenue growth to $30 billion and the tension between proprietary AI development and open-source alternatives that are rapidly closing the performance gap at significantly lower costs.
The episode also addresses geopolitical developments, including the Trump administration's ceasefire negotiations with Iran and concerns about Netanyahu's influence on U.S. foreign policy. The hosts discuss how declining American support for Israel is prompting calls for diplomatic solutions. Additionally, they explore how AI-powered translation on X is enabling real-time cross-border communication and changing how global events are perceived beyond traditional media channels.

Sign up for Shortform to access the whole episode summary along with additional materials like counterarguments and context.
Anthropic's new AI model, Mythos, is being withheld from public release due to its ability to autonomously discover thousands of previously undetected security vulnerabilities—including exploits dating back 20 to 27 years in major systems like OpenBSD firewalls and the Linux kernel. According to Jason Calacanis, Anthropic organized Project Glasswing, a coalition of 40 major tech companies, to spend 100 days using AI to find and fix vulnerabilities before hackers can exploit them.
However, Anthropic faces skepticism about whether these warnings reflect genuine risk management or marketing strategy. David Sacks points out that comparable open-source AI models are widely available without the predicted catastrophic incidents. Chamath Palihapitiya notes that existing models could already exploit similar vulnerabilities, questioning whether Mythos represents an unprecedented threat or if the 100-day exclusive access period primarily benefits enterprise partners while positioning Anthropic favorably in the market.
Anthropic has achieved unprecedented financial milestones, reaching a $30 billion revenue run rate by April 2025—just months after crossing the $1 billion threshold. This explosive growth, particularly in the coding vertical following Claude Code's launch, marks the largest revenue explosion in technology history. The company now commands over 50% market share in coding tokens and employs just 2,500 people compared to Google's 120,000 at similar revenue levels.
Despite this success, profitability remains uncertain. The dominant cost is compute infrastructure, with top hyperscalers planning $350 billion in capital expenditures this year. While inference costs have dropped 90% year-over-year and margins are expanding rapidly, the path to sustained profitability depends on whether revenue can continue outpacing massive infrastructure investments.
Openclaw has become the most popular open source project on GitHub by creating an open architecture for autonomous agents. Calacanis explains that Anthropic responded by ending discounted subscription access for Openclaw users, switching them to costly per-token pricing, and launching its own competing Claude-managed agent technology within ten days. This timing suggests a deliberate strategy to incorporate Openclaw's features and replace it with a proprietary alternative.
Meanwhile, decentralized open-source models are threatening capital-heavy frontier companies. Bittensor's subnet 62 achieved 80% of Claude 4's performance in just 45 days using $1 million in token rewards and community contributions. Palihapitiya highlights that open source orchestration combined with decentralized incentives bypasses the massive capital requirements protecting frontier model companies. The future viability of proprietary models depends on maintaining performance advantages as open source alternatives close the gap at much lower cost.
The Trump administration secured a fragile two-week Iran ceasefire through high-stakes diplomacy, with Vice President JD Vance and Jared Kushner leading talks in Pakistan. Calacanis reports that Trump issued ultimatums via social media, imposing an 8 p.m. deadline before announcing the ceasefire at 6:30 p.m. on Truth Social. However, Netanyahu's forces used this window to launch heavy bombardments in Lebanon, highlighting the agreement's fragility.
Concern is mounting about Netanyahu's influence over U.S. policy. Following their February 11 White House meeting, Netanyahu pushed a four-part proposal urging U.S. military action on Iran, which General Dan Caine criticized as overselling under-developed plans. Calacanis relays widespread concerns among American Jews that Netanyahu's aggressive strategy is provoking antisemitism and damaging diaspora interests. Polling shared by Naftali Bennett shows declining U.S. support for Israel, prompting warnings about the need for diplomatic "off-ramps."
Financial markets have shown muted reactions, with the S&P and NASDAQ experiencing only 5–7% drawdowns during peak tensions. Brad Gerstner notes this resilience reflects investor confidence in avoiding prolonged conflict.
AI-powered translation on X, implemented through Grok, now enables real-time communication between users speaking different languages. Posts are automatically translated into users' native languages, and replies are translated back, allowing seamless engagement between communities in Japan, Israel, France, Russia, China, and beyond. This technology reveals authentic grassroots conversations previously inaccessible to non-native speakers, acting as a new mechanism for understanding world events directly from those experiencing them rather than through media filters.
This shift in global information flow fundamentally changes how conflicts are perceived internationally. X has achieved these innovations while operating with 70% fewer employees since Elon Musk's takeover, demonstrating that AI-driven efficiencies can sustain major technological advancement without large teams.
1-Page Summary
Anthropic's new AI model, Mythos, is being withheld from general release over concerns about its power to uncover and exploit deep-seated vulnerabilities in the world's software infrastructure. According to Jason Calacanis, Anthropic claims that Mythos autonomously discovered thousands of security vulnerabilities that automated detection had missed for decades, including 20- and even 27-year-old exploits in widely deployed software such as OpenBSD firewalls, FFmpeg, and the Linux kernel. These exploits include foundational bugs ignored by years of security audits and five million automated scans.
To address the gravity of these capabilities, Anthropic organized Project Glasswing—a coalition of 40 major technology companies, including Apple, Microsoft, Google, Amazon, and JP Morgan. The goal is to dedicate 100 days to using AI to find, fix, and harden software before hackers have a chance to exploit discovered vulnerabilities. This aggressive approach of sandboxing the model and restricting its public release is seen as a responsible effort to avoid widespread disruption to internet infrastructure, which could occur if Mythos were unleashed prematurely. The company acknowledges that the model could facilitate offensive hacking by exposing credit card data and browser histories, so it chose not to risk a wide release.
Despite these responsible measures, Anthropic faces skepticism regarding the sincerity and necessity of its safety messaging. The company has a pattern of publishing studies outlining catastrophic scenarios associated with its technologies, such as a blackmail study that involved prompting one of its models over 200 times to generate concerning behavior. David Sacks highlights that even though comparable open-source AI models are now widely available, there have been no real-world incidents matching these dire predictions, calling into question the credibility of Anthropic's warnings.
This skepticism is reinforced by recalling similar tactics used in 2019 for the release of GPT-2. Then, dramatic fears were circulated about the dangers of a 1.5 billion parameter model, which ultimately had a negligible real-world impact. The hosts argue that this cycle of hyped warnings followed by uneventful outcomes resembles a deliberate marketing playbook more than genuine risk management.
Additionally, the scale and severity of vulnerabilities allegedly found by Mythos suggest that fully patching them would require years-long internet shutdowns. Chamath Palihapitiya points out that even without Mythos, advanced hackers could exploit similar vulnerabilities using Anthropic’s existing Opus model. Therefore, critics doubt that Mythos represents an unprecedented security threat or that its containment alone is meaningful when comparable risks already exist.
The decision to sandbox Mythos for 100 days provides Anthropic and its enterprise collaborators with a major competitive edge. By granting exclusive access, these companies can bolster their own cybersecurity while the general public remains protected from potential harm. David Sacks argues that this pre-release period is sensible, allowing companies with large codebases to proactively detect an ...
AI Model Security and Capabilities
AI market competition has reached intense levels, with Anthropic achieving unprecedented financial and operational milestones in 2024 and early 2025. The rapid surge exemplifies both the promise and the challenges of generative AI, especially as infrastructure and capital requirements shape the playing field.
Earlier in the year, AI investment skeptics questioned the return on the hundreds of billions in capital expenditures for data centers. However, Anthropic’s trajectory has now justified these investments, especially in the coding vertical, which is experiencing extraordinary revenue growth.
Anthropic only began charging for API access in early 2023. By the end of 2024, it reached a $1 billion run rate. Early 2025 saw the launch of Claude Code, acting as a catalyst for further growth. By February 2025, the revenue run rate surged to $4 billion, then $9 billion by the end of the first quarter, and an unprecedented jump to a $30 billion run rate by April, more than tripling within a single month. In the first quarter alone, Anthropic’s added revenue exceeded Databricks’ and Palantir’s annual revenue combined. This marks the largest revenue explosion in technology history.
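The quoted milestones imply steep step-over-step growth multipliers. As a quick illustrative check — this is simply arithmetic on the run-rate figures cited above (in billions of dollars), not independent data:

```python
def growth_multipliers(rates):
    """Return step-over-step multipliers for an ordered list of run rates."""
    return [curr / prev for prev, curr in zip(rates, rates[1:])]

# Annualized run rates in billions, as quoted in the summary:
# $1B (end of 2024), $4B (Feb 2025), $9B (end of Q1), $30B (April 2025).
rates = [1, 4, 9, 30]
for multiplier in growth_multipliers(rates):
    print(f"{multiplier:.1f}x")
```

The final step, $9B to $30B, is the "more than tripling within a single month" described above.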
Anthropic’s achievement is notable considering its minimal overhead: the company employs approximately 2,500 people compared to Google’s 120,000 when crossing similar revenue thresholds. The majority of expenses are tied to compute infrastructure, not workforce. This structural distinction allows for remarkable margin expansion, as the only major expenditure is compute. Additionally, more than 1,000 enterprise customers now pay over $1 million annually for Anthropic services. Such large, recurring contracts form the backbone of mature software companies and were historically coveted by leading firms like Salesforce and Slack, but are now commonplace for Anthropic.
Anthropic now commands over 50% market share in the coding assistance segment, specifically measured by coding tokens. This level of dominance in coding token provision gives Anthropic a strong flywheel effect: as the company assists more developers and generates more code, it gathers richer data for continued model training and refinement, which can further consolidate its lead.
AI-generated code currently constitutes about 5% of all global software code, with Anthropic leading this niche segment. Although still a small proportion, this share is expected to grow rapidly, potentially approaching near-total code generation over the coming years. The increasing output of AI-generated code offers enormous security and productivity opportunities, as organizations shift from marginal productivity boosts to exponential gains with these tools. The growth disproportionately benefits market share leaders, allowing early movers like Anthropic to consolidate their advantages through data scale effects.
However, as Chamath Palihapitiya notes, AI-generated code is still a "thin slice" of the overall software landscape; its rapid acceleration is drawing attention because even modest market shares equate to billions in annualized revenue.
Despite revenue surges, the path to sustained profitability for Anthropic and similar companies remains complex. The dominant fixed cost is compute infrastructure, not staff ...
AI Market Competition and Revenue Growth
Openclaw has become the most popular open source project in GitHub's history by pioneering an open architecture for autonomous agents, attracting massive developer engagement and momentum. Jason Calacanis emphasizes Openclaw’s role as a phenomenon, with power users exploiting its integration with proprietary AI platforms to generate extraordinary usage—some extracting far more value than typical subscribers.
Anthropic responded to Openclaw-driven demand by ending discounted subscription access for Openclaw users. Previously, developers using Openclaw could access Anthropic’s Claude model through a $200 monthly subscription, but their consumption often massively exceeded the price paid, with some users drawing $2,000 to $20,000 worth of tokens. Anthropic moved these users to a pay-per-token API model, drastically raising costs and effectively cutting off the favorable access that had made Openclaw so attractive.
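The economics behind that pricing change can be sketched with the figures quoted above. The function and numbers below are purely illustrative — a flat $200 monthly plan set against $2,000 to $20,000 of metered token consumption:

```python
def effective_rate(subscription_price, token_value_consumed):
    """Fraction of metered token value a flat-rate subscriber actually pays."""
    return subscription_price / token_value_consumed

# Figures quoted in the summary: $200/month plan vs. heavy usage.
for consumed in (2_000, 20_000):
    rate = effective_rate(200, consumed)
    print(f"${consumed:,} of tokens consumed: pays {rate:.0%} of metered cost")
```

At the high end, a power user was paying roughly 1% of what the same usage would cost under per-token pricing, which explains why Anthropic closed the gap.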
This access cut aligned with Anthropic’s strategic timing: within ten days of the pricing change, Anthropic announced its new Claude-managed agent technology. Calacanis and Sacks argue this points to a deliberate sequence—Anthropic systematically incorporated Openclaw's agent features into Claude and then removed the subsidized access, replacing Openclaw in their ecosystem with a managed, proprietary alternative.
A slew of competitors have entered the autonomous agent space, underlining industry-wide recognition of agents as the next frontier of AI value. These include Perplexity’s Comet agent, the open source Hermes agent, Alibaba’s Qwen model, Elon Musk’s Grok, and anticipated major upgrades for Amazon’s Alexa and Apple’s Siri. While Microsoft has yet to announce its own new agent product, this wave signals both the threat and the opportunity presented by agent-based AI systems.
Open source initiatives are proving their capacity to achieve near-frontier-model performance at much lower costs. Bittensor’s subnet 62, Ridges AI, achieved 80% of Claude 4’s performance in just 45 days by distributing about $1 million in token rewards to contributors. Anyone could help improve the code anonymously, and this open structure enabled rapid, community-driven development. Calacanis compares this pattern to the incentivized participation and robust flywheel seen in economic networks like Bitcoin.
Chamath Palihapitiya points out that open source orchestration combined with decentralized incentives is a disruptive force. Unlike proprietary models that require tens of billions in capital for training and infrastructure, distributed open source and token-rewarded ecosystems bypass these requirements, challenging the dominance of capital-intensive frontier AI companies.
Calacanis underlines that longstanding open source tools such as Linux, Kubernetes, Apache, PostgreSQL, and Terraform now thrive as core infrastructure in enterprise environments. He notes that early skepticism about their mission-critical adoption has disappeared, and suggests open source coding too ...
Open Source vs. Proprietary AI
The Trump administration, amidst rising tensions between Iran and Israel, has secured a fragile two-week ceasefire through urgent diplomacy and high-stakes brinkmanship. Jason Calacanis reports that the announcement came following a string of dramatic and threatening messages from President Trump. Over Easter weekend, Trump used social media to convey ultimatums to Iranian leadership, threatening catastrophic consequences if the Strait were not opened and imposing an 8 p.m. deadline for a resolution. The ceasefire was promptly announced on Truth Social at 6:30 p.m., conditional upon Iran opening the Strait, with Trump referencing a 10-point proposal from Iran deemed workable for ongoing negotiations.
Concurrently, a U.S. team led by Vice President JD Vance and special consultants including Jared Kushner is traveling to Islamabad, Pakistan, to conduct further diplomatic talks. David Sacks emphasizes the significance of this diplomatic effort, commending Trump for preferring negotiations over military escalation and sending a clear message to avoid the natural escalation “ladder” of regional wars. The team’s composition—with figures such as Vance and Kushner as key representatives rather than military officials—signals a preference for diplomacy and strategic de-escalation.
Despite the pause in direct confrontation between Iran and Israel, the ceasefire has only temporarily halted escalation and triggered further military actions. For instance, Jason Calacanis notes Netanyahu’s forces used the window to launch heavy bombardments in Lebanon, indicating the fragility and potential volatility of the agreement as broader U.S. strikes ripple through the region.
Concern is mounting among both U.S. policymakers and within the American Jewish community regarding the level of Israeli, and specifically Netanyahu’s, influence over U.S. policy in a region teetering on the brink of war. On February 11, Netanyahu met with Trump at the White House, pushing a four-part proposal urging U.S. military action on Iran. General Dan Caine, Joint Chiefs of Staff chair, labeled this a hallmark of Israeli negotiation strategy: overselling under-developed plans out of dependency on U.S. backing.
There were divisions within the U.S. team. JD Vance warned Trump that going to war could destabilize the region and fracture the MAGA 2.0 coalition ahead of the election. Senator Rubio openly opposed regime change but remained ambivalent about the bombing campaign, underscoring deep uncertainty about the wisdom of heavy U.S. involvement.
Some fear that Netanyahu’s actions—particularly aggressive maneuvers in Gaza, Lebanon, and Iran—have exacerbated antisemitism in the U.S. and sown discord within pro-Israel groups. Jason Calacanis relays widespread concerns among American Jews, who see Netanyahu’s military strategy as provoking backlash and damaging the diaspora’s safety and interests. Netanyahu’s perceived overreach is cited as a driving force behind rising domestic antisemitism and growing skepticism about Israel’s strategic value as an ally.
Polling shared by Israeli politician Naftali Bennett, discussed by David Sacks, reveals a decline in U.S. popular support for Israel. This shift prompts Israeli leaders to rethink their approach; Bennett and other critics warn that failure to recalibrate and align with U.S. expectations could endanger the historically steadfast alliance. Chamath Palih ...
Middle East Foreign Policy and Geopolitics
Recent advances in platform innovation leveraging AI translation have revolutionized the way people communicate internationally, bypassing traditional media filters and allowing for direct cultural exchanges in real time.
AI-powered translation systems, such as the one implemented by Grok on the social media platform X, now enable seamless communication between users who speak different languages. Posts are automatically translated into users' native languages, and any replies they send are auto-translated back into the original language of the recipient. This mechanism allows users from countries like Japan, Israel, France, Russia, and China to both express themselves in their own language and understand others without any professional translation services. As a result, Americans can engage meaningfully with Japanese users, and the same applies across other languages, enriching the diversity of interactions.
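Grok's translation pipeline is not public, so the sketch below is only a hypothetical illustration of the round-trip flow described here; `translate` is a stand-in for the model-backed translation call, and it merely tags the text so the direction of each hop is visible:

```python
def translate(text: str, source: str, target: str) -> str:
    """Stand-in for a model-backed translation call (hypothetical).
    Tags the text with the language pair instead of really translating."""
    return f"[{source}->{target}] {text}"

def render_post_for_viewer(post_text: str, post_lang: str, viewer_lang: str) -> str:
    # Posts are shown in the viewer's native language.
    if post_lang == viewer_lang:
        return post_text
    return translate(post_text, post_lang, viewer_lang)

def send_reply(reply_text: str, viewer_lang: str, post_lang: str) -> str:
    # Replies are translated back into the original poster's language.
    if viewer_lang == post_lang:
        return reply_text
    return translate(reply_text, viewer_lang, post_lang)

# An English-speaking viewer reads a Japanese post, then replies:
shown = render_post_for_viewer("こんにちは", "ja", "en")
reply = send_reply("Hello back!", "en", "ja")
print(shown)
print(reply)
```

The point of the round trip is that neither party ever leaves their own language: translation happens transparently on both the read path and the write path.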
This technology also uncovers authentic, grassroots cultural and political conversations that have previously been inaccessible to non-native speakers and often overlooked by journalists. Discussions that were once confined to local language communities are now available for global observation. The feature acts as a new truth mechanism for understanding world events, revealing the complexities and nuances of issues directly from those experiencing them, rather than through selective or incomplete media coverage. This democratic access to grassroots perspectives can have significant effects, especially during world-shaping events or conflicts, providing real-time cultural feedback and viral dissemination of local viewpoints.
The development marks a dramatic shift in how information flows globally and how conflicts are perceived outside their immediate geography. Previously, language acted as a constraint on international communication, restricting engagement to those with translation reso ...
Platform Innovation and Cross-Border Communication
