In this episode of All-In, Sarah B. Rogers joins Jason Calacanis and David Sacks to examine the differences between US and European approaches to free speech regulation. The discussion covers recent European legislation like the Digital Services Act and its impact on American tech companies, along with how governments use indirect methods to influence content moderation despite First Amendment protections.
The panel explores how the definition of "disinformation" has evolved and its implications for content moderation on tech platforms. They analyze emerging solutions for managing online content, including AI-powered fact-checking tools and community-driven moderation systems, with particular attention to how these approaches can bridge divides between opposing viewpoints while maintaining free speech principles.

In a discussion between Sarah B. Rogers, Jason Calacanis, and David Sacks, the panel explores the stark differences between US and European approaches to free speech regulation, particularly as they affect technology companies.
Rogers explains how cultural and historical differences have led to divergent attitudes toward free speech in the US and Europe. She points to recent European legislation—the UK's Online Safety Act and the EU's Digital Services Act—which impose strict content moderation requirements on tech platforms. These regulations, according to Rogers, are having extraterritorial impacts on American websites, leading to fines for content that's legal in the US but prohibited in Europe.
David Sacks describes the DSA as a "digital speed trap" that functions as a tariff on American tech companies. The panel notes that even when American companies attempt to comply through geo-blocking, European regulators may still pursue legal action, highlighting the growing tension between US free speech norms and European regulatory demands.
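As a rough illustration of the geo-blocking compliance measure mentioned above, a platform can refuse to serve traffic that appears to originate in the EU or UK. The following is a minimal sketch assuming the MaxMind geoip2 library; the database path, blocked-country list, and fail-open policy are illustrative assumptions, not any particular company's implementation.

```python
# Minimal geo-blocking sketch (illustrative, not any specific platform's code).
# Assumes the MaxMind geoip2 library and a local GeoLite2 country database.
import geoip2.database
import geoip2.errors

# Truncated, illustrative list of blocked jurisdictions (ISO country codes).
BLOCKED_COUNTRIES = {"DE", "FR", "IE", "NL", "GB"}

reader = geoip2.database.Reader("/path/to/GeoLite2-Country.mmdb")  # hypothetical path

def should_block(client_ip: str) -> bool:
    """Return True when the request appears to come from a blocked region."""
    try:
        iso_code = reader.country(client_ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown origin: fail open (a policy choice, not a rule)
    return iso_code in BLOCKED_COUNTRIES
```

As the panel notes, even faithful geo-blocking of this kind has not stopped European regulators from pursuing legal action.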
Rogers discusses how governments are finding indirect ways to influence content moderation, noting that while the US government cannot directly censor speech under the First Amendment, it can exert pressure through various channels. Calacanis and Sacks point out that financial regulations and merger approvals are being used as leverage to influence tech platforms' content moderation policies.
The panel examines how the definition of "disinformation" has expanded to include legitimate viewpoints and adverse truths. Calacanis notes instances where factual information about COVID-19 was initially suppressed only to be later confirmed by official sources. The discussion turns to emerging alternatives for content moderation, with Sacks highlighting Grok, an AI fact-checking tool, and the community notes feature on social media. Rogers praises the algorithmic approach of community notes, particularly its ability to promote consensus between typically opposing viewpoints.
1-Page Summary
US vs. Europe: Free Speech Analysis
Sarah B. Rogers and Jason Calacanis weigh in on the contrasting free speech standards between the United States and Europe, with particular emphasis on recent regulatory measures in Europe that could affect American technology companies.
Rogers reflects on the cultural and historical differences underpinning the divergent attitudes toward free speech in the US and Europe. Using the Charlie Hebdo attack as a starting point, she contrasts Europe's climate with the United States' broad protection of free speech, under which figures like Charlie Kirk and the Charlie Hebdo cartoonists could theoretically speak freely. She likens Europe's online suppression of dissent to a form of "religious zealotry," comparing it to the extremism behind the attack on Charlie Hebdo's cartoonists.
Rogers discusses the impact of two European legislative acts on US tech platforms: the Online Safety Act (OSA) in the UK and the Digital Services Act (DSA) in the EU. The OSA imposes age-gating obligations on content and mandates risk assessments and, in some cases, removal of content the UK considers illegal. The DSA contains hate speech provisions that require EU member states to adopt a minimum floor for prohibiting hate speech.
Rogers cites examples of enforcement reaching beyond geographic borders, such as former EU officials threatening Elon Musk with enforcement action over a potentially offensive interview hosted on Twitter Spaces. She expresses concern about the extraterritorial impact these regulations are having on American websites, which face fines from foreign regulators for content that is legal in the US but prohibited under European law.
David Sacks mentions that US companies face potential free speech-related fines in the UK and EU. Rogers notes that such fines have not yet materialized under the UK OSA, though the act's provisions, including AI-related restrictions, are gradually coming into effect.
Active litigation in American courts involving 4chan, along with a hefty fine against an unnamed company that Rogers mentioned during her recent European tour, heightens worries over a potential "censorship tariff." An infographic comparing revenue raised in the EU through fining ...
Government Pressure and Intermediaries in Censorship
Experts discuss the indirect means governments use to pressure tech firms and other platforms to conform to certain speech standards, bypassing direct legislative action.
Sarah B. Rogers highlights the challenge disinformation poses to the government's engagement with foreign and domestic audiences on the internet. She notes that the term "disinformation" now encompasses truthful information that might support an "adverse narrative." Rogers explains that while the First Amendment bars the government from directly forcing tech platforms to take down posts, it still exerts indirect pressure, signaling that companies should take particular actions to stay in the government's good graces.
Jason Calacanis and David Sacks express their concerns about free speech and question the role of non-governmental organizations (NGOs) in exerting pressure on regulators. They hint at a potential "censorship industrial complex," where free speech may be at risk due to behind-the-scenes pressures.
Calacanis and Sacks further discuss the leverage that financial regulators have over tech platforms, specifically through the threat of blocking mergers. Calacanis suggests that tech companies might engage in self-censoring activities, l ...
The Debate Around "Disinformation" and Content Moderation
The commentators question how "disinformation" is defined, examine the implications of current content moderation practices, and suggest newer models for a more balanced approach.
There is growing concern that the scope of what constitutes "disinformation" has broadened, potentially leading to the suppression of content, including legitimate viewpoints and adverse truths. Calacanis expresses unease over conversations shut down under the Biden administration, such as those questioning the necessity of vaccines for children, hinting at possible censorship. Other instances include suppressed claims that the COVID-19 vaccine could not completely prevent transmission and that the virus may have originated in a lab, assertions later confirmed by entities such as a House committee and the CIA.
The discussion further critiques current content moderation practices, such as fact-checking and labeling, which are often influenced by government pressure. Calacanis implies that these practices serve more to de-platform or de-monetize certain viewpoints than to enhance transparency. Rogers casts doubt on the role of NGOs in disinformation labeling, since their actions lack transparency, especially when they encourage credit card companies and payment processors to withdraw funding from certain websites, keeping both content creators and viewers in the dark.
Emergent user-driven and algorithmic approaches, such as community notes and AI fact-checking, are presented as viable alternatives to government-led content moderation.
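To make the consensus-promoting property of community notes concrete: X's open-sourced Community Notes ranking is based on matrix factorization over rater-note data, but the bridging intuition can be sketched with a toy scoring rule that surfaces a note only when raters from both viewpoint clusters rate it helpful. The cluster labels, data shapes, and threshold below are illustrative assumptions, not the production algorithm.

```python
# Toy "bridging" score (illustrative; the real Community Notes algorithm uses
# matrix factorization over the full rater-note matrix).
from dataclasses import dataclass

@dataclass
class Rating:
    rater_cluster: str  # hypothetical viewpoint cluster, e.g. "left" / "right"
    helpful: bool       # whether this rater marked the note helpful

def bridging_score(ratings: list[Rating]) -> float:
    """Minimum helpful-rate across clusters: high only with cross-camp consensus."""
    clusters = {r.rater_cluster for r in ratings}
    if len(clusters) < 2:
        return 0.0  # no cross-cluster evidence yet
    rates = []
    for cluster in clusters:
        votes = [r.helpful for r in ratings if r.rater_cluster == cluster]
        rates.append(sum(votes) / len(votes))
    # A note endorsed by only one camp scores low regardless of raw vote count.
    return min(rates)

# Example: both camps lean "helpful", so the note clears an illustrative cutoff.
ratings = [Rating("left", True), Rating("left", True),
           Rating("right", True), Rating("right", False)]
print(bridging_score(ratings) >= 0.4)  # True (left 1.0, right 0.5 -> min 0.5)
```

Taking the minimum across clusters rather than an overall average is what rewards cross-camp agreement, the property Rogers praises.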
David Sacks talks about Grok, an AI tool designed for fact-checkin ...
