PDF Summary: Superagency, by Reid Hoffman and Greg Beato
Below is a preview of the Shortform book summary of Superagency by Reid Hoffman and Greg Beato. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of Superagency
What if artificial intelligence (AI) isn’t humanity’s greatest threat, but its greatest opportunity for empowerment? In Superagency, LinkedIn co-founder Reid Hoffman and writer Greg Beato challenge the dominant anxieties around AI, arguing that it can dramatically amplify individual human capabilities while creating collective benefits for society.
Rather than replacing human agency, they contend that AI will help you access expertise, analyze complex problems, and achieve goals that were previously out of reach. They argue that the real risk isn’t AI becoming too powerful, but democratic societies withdrawing from AI development and ceding control to less benevolent actors.
In our guide, we’ll explore Hoffman and Beato’s insights through the lens of a few key questions: What does beneficial AI look like? Are restrictions on its development counterproductive? And how can we actively participate in shaping AI’s future? Along the way, we’ll examine how Hoffman and Beato’s optimistic predictions compare with emerging research on AI’s effects and whether their proposals for shaping AI’s future can work in practice.
(continued)...
What People Really Think About AI
Recent polling suggests the patterns Hoffman and Beato identify in people’s attitudes toward AI might be better understood as overlapping spectrums rather than distinct categories. A 2024 Pew Research Center survey indicates that AI attitudes operate along multiple independent sliding scales, which may help explain why Hoffman and Beato’s middle-ground position resonates: Most people hold nuanced views that combine optimism in some areas with caution in others, and someone can score high on one dimension while scoring low on another.
One dimension of AI opinion is excitement versus concern about AI’s potential. While some people are clearly more excited or worried, identical shares of experts and the public (38% each) say they’re equally excited and concerned, suggesting this exists on a continuum rather than as opposing camps. Another dimension is trust in different oversight mechanisms. Both experts (56%) and the public (55%) want more government regulation, but they lack confidence in both government effectiveness (53% of experts, 62% of public) and corporate responsibility (55% of experts, 59% of public). This suggests people can support regulation while doubting that it will be effective.
A third dimension along which opinions about AI vary is optimism by application and timeframe. The same people can be optimistic about AI in healthcare while pessimistic about its impact on elections, or supportive of current AI development while worried about long-term risks. Experts are far more positive about AI’s effects on jobs and education compared to the public, but both groups are skeptical about its impact on elections and news. Finally, a fourth dimension is risk tolerance by context. University-based experts are much less confident in companies (60% lack confidence) than industry experts (39%), suggesting that even within the category of AI experts, a person’s institutional context shapes their risk assessment.
Why Fear-Based Approaches to Regulating AI Will Backfire
Imagine if we’d outlawed the printing press after the first time a book spread misinformation, or restricted the internet because early users encountered spam. Hoffman and Beato argue that this is what many AI critics are proposing today: halting or severely restricting AI development because of the possible risks. The authors contend that this fear-based approach makes negative outcomes more likely because restriction often increases rather than decreases risk. When you try to prevent all possible dangers of a new technology, you typically push development underground or into the hands of less responsible actors.
This means heavy regulatory barriers wouldn’t stop AI progress: They’d just shift it to organizations and nations that are less concerned with safety, transparency, and human rights. Hoffman and Beato note that these restrictions favor large corporations that can afford compliance costs while shutting out smaller innovators, academic researchers, and public interest groups. They argue that this creates exactly the concentration of power that critics claim to want to prevent, ensuring that only the biggest players shape AI. By contrast, broader competition among AI developers creates further incentives for responsible development, as companies with poor safety records lose credibility and market share.
(Shortform note: The idea that regulation of AI would increase risk finds a counterexample in social media, which developed with minimal oversight yet still became dominated by a few massive corporations. Rather than competing to develop the social network that would be most enriching or beneficial for users, platforms found that controversial and emotionally triggering content generated more engagement and ad revenue. Their competition for ad revenue incentivized them to optimize for outrage rather than accuracy, contributed to rising rates of anxiety and depression among teens, and led to extensive online surveillance—suggesting the absence of regulation doesn’t automatically lead to platforms that serve users’ best interests.)
Hoffman and Beato argue that AI systems become safer through innovation, not restriction. When AI systems are deployed gradually and monitored carefully, problems surface quickly and can be addressed before they become widespread. For example, early versions of ChatGPT revealed issues with generating false information and responding inappropriately to certain prompts, problems that could only surface and be addressed with large-scale real-world use.
(Shortform note: Hoffman and Beato argue that iteration makes AI safer, but mounting evidence suggests that this progress might not always be linear. Journalists reported in 2025 that the latest versions of systems from OpenAI, Google, and DeepSeek gave inaccurate responses to factual questions 33-79% of the time, rates much higher than previous versions. Some experts argue that these errors aren’t minor bugs but basic consequences of how these systems work: They generate plausible-sounding text that may not accurately represent reality. At the same time, other experts raise concerns that users who rely heavily on tools like ChatGPT show weaker critical thinking skills and overconfidence in AI-generated information.)
The Real Stakes: What We Risk by Waiting to Develop AI
Hoffman and Beato argue that real people are suffering from problems AI could help solve. Think about the lives of people you care about: A family member struggling with a rare disease could benefit from AI-powered drug discovery that accelerates the development of new treatments. A child who’s falling behind in school could thrive with AI tutoring that adapts to their learning style. Each day these tools remain undeveloped represents a missed opportunity to improve lives. The authors contend the question isn’t whether these benefits are worth some risk—it’s whether we can afford not to pursue them.
(Shortform note: The opportunity cost argument suggests that the problems AI could solve emerge from artificial barriers to innovation, but as Ezra Klein and Derek Thompson note in Abundance, some of these problems reflect deeper limitations. For instance, rare disease research faces constraints like small patient populations that make clinical trials difficult despite AI advances. Meanwhile, educational inequality stems from deeply entrenched structural factors such as segregated schools and unequal funding that reflect and reinforce social hierarchies even as technological tools develop. These issues suggest we’ll need to address structural barriers, as well as technological ones, for AI to deliver the benefits the authors envision.)
Perhaps most fundamentally, restrictive approaches carry the opportunity cost of reduced human agency itself. When people don’t have access to AI tools, they remain limited by constraints that technology could help them overcome. A small business owner trying to compete with large corporations could use AI for market analysis and customer service. A non-native speaker could use AI translation tools to participate in academic or professional discussions. A person with mobility limitations could use AI-powered systems to maintain greater independence. Restricting access to these capabilities doesn’t just slow technological progress—it limits human potential at precisely the moment when it could be expanding.
(Shortform note: Protecting human agency as AI evolves is an emphasis Hoffman and Beato share with many researchers. Philosopher Luciano Floridi worries that AI represents “agency without intelligence”—systems that can act purposefully in the world without consciousness or understanding. This isn’t just a semantic distinction: If AI operates as an independent form of agency rather than a tool extending human agency, it could narrow the scope of human thought and action in subtle ways. Researchers already say that AI systems can reduce the range of choices people consider, limit serendipitous discoveries, and lead to “de-skilling” as people offload cognitive tasks to machines, demonstrating how quickly human agency can be affected.)
The Global Stakes Make Delay an Even Greater Risk
The global competition for AI leadership adds another layer of urgency that goes beyond individual benefits. Hoffman and Beato point out that countries that achieve AI dominance will capture disproportionate economic advantages across multiple industries, from manufacturing to financial services and healthcare. This won’t just give them better technology; it will also help them maintain the economic competitiveness that supports jobs, wages, and living standards in your community. The countries that lead in AI will set the standards for how it’s used globally, just as the U.S. shaped internet governance by leading in internet development.
(Shortform note: Experts say AI leadership requires more than just speedy development; it also requires access to large, diverse datasets; capital; supportive regulatory frameworks; and the capacity for patents and research. But it’s unclear whether national strategy influences where breakthroughs emerge. Stephen Witt notes in The Thinking Machine that Nvidia’s AI dominance stems from decisions made in the 1990s for video game graphics, not from AI policy, which didn’t yet exist. Similarly, while China’s government directs AI development through subsidies and policy, DeepSeek’s breakthrough appears to have surprised even industry analysts, suggesting it emerged from private innovation rather than government planning.)
The authors also warn that if democratic countries withdraw from AI development out of caution, they risk falling behind economically while ceding technological leadership to nations that prioritize surveillance and control over freedom and agency. Authoritarian governments are already deploying AI for surveillance, social control, and political suppression. If democratic societies don’t develop their own AI capabilities and frameworks, they may find themselves at an economic disadvantage and vulnerable to AI-enabled influence from abroad.
(Shortform note: Authoritarian regimes don’t just use AI for surveillance; they also accelerate development by providing companies with vast amounts of surveillance data to train their AI. Chinese AI companies with access to data from thousands of police surveillance cameras produce 49% more software products than those without such access, since facial recognition algorithms trained on surveillance footage can also power commercial applications. This creates a feedback loop: Repression generates datasets to train more sophisticated AI, which then enables more effective repression. For democracies, this means the challenge isn’t only to develop competitive AI, but to foster innovation while protecting civil liberties.)
How Do We Get There?
If we want AI to enhance rather than replace human agency, how do we actually make that happen? Hoffman and Beato propose using what they call a “techno-humanist compass”—a navigational tool for making decisions about AI development that consistently points toward outcomes that enhance individual and collective human agency. Unlike a blueprint that tries to predict every challenge in advance, this compass provides more dynamic guidance. The authors’ recommendations center on two key principles guided by this compass: involving real people in shaping AI systems, and creating governance structures that can adapt as quickly as the technology evolves.
What Is Techno-Humanism?
Hoffman and Beato’s “techno-humanist compass” builds on the philosophical tradition of humanism, which places human welfare and dignity at the center of moral thinking. This approach emerged from a broader movement led by thinkers like Jason Crawford. Crawford argues that technology has dramatically expanded human choice: giving people more options about where to live, whom to marry, whether to have children, how to express themselves, and what knowledge to pursue. For techno-humanists, this expansion of choice is the ultimate goal of technological progress because it translates into greater human agency.
The movement gained urgency with the arrival of modern AI: Traditional tech optimists assume that faster progress automatically benefits humanity, but critics say this might not hold true for AI, which could develop goals that conflict with our values. Techno-humanists believe we need to design technology to actively promote our values, but this idea faces practical challenges: Techno-humanism doesn’t specify which values should take priority when they conflict—for instance, individual freedom versus collective safety, or present benefits versus future risks. A compass pointing toward “human agency” may not provide clear guidance when different groups disagree as to what constitutes human flourishing.
Develop AI Through Real-World Engagement
The most important principle for achieving beneficial AI is ensuring that ordinary people have a direct role in shaping how AI systems work. Hoffman and Beato argue that AI development should be an ongoing conversation between developers and the millions of people who will use the systems they build, rather than a top-down process where experts decide what’s best for everyone else. Instead of perfecting technologies in isolation before releasing them, developers can release AI systems gradually while gathering feedback from real users. The authors contend that AI systems can only be made more useful, safer, and better aligned with human values when they encounter the full diversity of human users.
OpenAI’s approach with ChatGPT demonstrates how this works in practice: Rather than keeping the system locked in research laboratories until it was flawless, the company released it to users while clearly communicating its limitations and actively soliciting feedback. Millions of users discovered creative applications, identified problematic responses, and provided insights that shaped subsequent improvements.
Hoffman and Beato explain that public engagement also builds the trust and understanding necessary for AI to enhance human agency. When people develop informed opinions about what works well and what needs improvement, they can engage in better discussions about regulations and priorities. The authors emphasize that this approach requires responsible practices, not reckless experimentation. This means starting with limited user groups, establishing clear metrics for success and failure, maintaining human oversight throughout the process, and being prepared to modify or withdraw systems that cause problems.
The ChatGPT Reality Check
Hoffman and Beato cite OpenAI’s ChatGPT as a successful example of iterative deployment and democratic participation. But the details of the rollout in late 2022 and early 2023 suggest gaps in the company’s approaches to both of these goals (and not just because ChatGPT gained a million users in its first week). OpenAI said it released ChatGPT to the public to get users’ feedback, framing the launch as an experimental collaboration with users. But the company gave minimal guidance about what the system does or doesn’t do: On launch, OpenAI included only a brief warning that the system “may occasionally generate incorrect information,” language that could apply to virtually any information source.
Users quickly discovered that ChatGPT routinely fabricated entire legal cases, academic papers, and news stories while presenting this false information with complete confidence. While OpenAI presented ChatGPT as an experimental system requiring user feedback, it designed the tool to mimic the authoritative interface of a search engine without explaining that it doesn’t understand language or distinguish fact from fiction. As a result, OpenAI was criticized for providing insufficient context for meaningful feedback from users. This criticism calls into question whether access to tools like ChatGPT enhances human agency or instead encourages dependency on an unreliable information source.
Create Adaptive Governance That Evolves With Technology
The second key principle is developing governance approaches that can keep pace with rapidly evolving AI technology. Traditional regulation often creates rules that are either irrelevant by the time they’re implemented or so rigid they stifle innovation. Hoffman and Beato propose “adaptive regulation” that’s more like software development than traditional lawmaking: establish basic principles and frameworks, then iterate based on real-world outcomes. Instead of mandating specific technical approaches or prohibiting entire categories of AI development, regulations should focus on outcomes. Doing this would establish clear standards for safety, transparency, fairness, and accountability while allowing flexibility in how they are met.
(Shortform note: Hoffman and Beato are confident that regulations can evolve quickly enough to keep pace with technological change. But what if AI capabilities emerge that require completely reimagining our legal and ethical frameworks? Consider an analogous scenario: Scientists are using AI to decode whale communication. If they find that whales have complex language, social cognition, and consciousness, our entire legal system based on human-only rights would need reconstruction, not just updating. Similarly, if AI systems develop genuine autonomy, begin making independent decisions, or start communicating in ways we don’t understand, we may struggle to adapt our existing regulations to realities they weren’t designed to handle.)
This approach would require new models of collaboration between regulators, developers, and users. Hoffman and Beato explain that traditional regulatory agencies often lack the technical knowledge to understand AI systems or the agility needed to update rules quickly. They suggest creating specialized oversight bodies that combine technical expertise with regulatory authority and can operate at the speed of technological change rather than the traditional pace of government bureaucracy. International cooperation also becomes essential under this model: Democratic nations need to work together on establishing shared principles and standards while maintaining enough flexibility to adapt to local contexts and preferences.
(Shortform note: Legal experts say regulation always lags behind technological development, but AI presents a unique version of the problem that may necessitate the specialized regulatory bodies Hoffman and Beato propose. Unlike past innovations that required changes to specific areas of law, AI simultaneously impacts patents, copyrights, antitrust, privacy, and criminal justice, which makes it harder for any single regulator to address. The core issue isn’t just speed: It’s that our legal system enables companies to deploy new technologies until explicitly told they can’t. The challenge is especially acute with AI surveillance technologies, where law enforcement agencies use facial recognition and data analysis tools that operate in legal gray areas, and regulations remain years behind.)
Hoffman and Beato contend that regular people like you play a crucial role in making adaptive governance work. This means staying informed about AI developments, providing feedback to companies developing AI systems, and engaging with your elected representatives about AI policies that reflect your values. Your choices as a consumer—supporting companies that develop AI responsibly and transparently—send market signals about what kinds of development should be prioritized. When millions of people actively participate in shaping AI governance rather than leaving it to experts, the resulting systems are more likely to enhance rather than diminish human agency—and to serve people like you, rather than just tech elites.
The Illusion of Consumer Choice
The argument that you can shape AI development through your choices assumes you have meaningful alternatives to choose from, but in practice, tech companies have a history of making this difficult. In 2019, journalist Kashmir Hill attempted to avoid Amazon, Apple, Facebook, Google, and Microsoft but found it nearly impossible to function online without relying on these companies. Amazon’s web hosting services power countless websites, and Google’s infrastructure is so widespread that almost every website Hill visited used Google fonts, analytics, or bot-detection services, causing her entire internet experience to slow down when she blocked Google.
Similar experiments by other journalists reach the same conclusion: While alternatives exist for specific services, the tech giants have become so embedded in our digital infrastructure that true choice often proves illusory. It seems reasonable to worry that the situation has only gotten worse with AI: Critics contend that opting out of AI systems online has become virtually impossible. Google automatically places AI-generated answers above traditional search results, while Facebook, Instagram, and LinkedIn prompt users to engage with AI tools for creating content. Meanwhile, AI has become embedded in critical services like healthcare, finance, and hiring, and AI tools are constantly collecting information about you.
Despite these limitations, experts say there are concrete steps you can take to exercise some control: Many major AI platforms now offer opt-out options for data training. You can prevent ChatGPT from using your conversations to improve future models, stop LinkedIn from training AI on your posts, and block AI crawlers from scraping content you publish online. When using AI tools, you can avoid sharing personally identifiable information, delete conversation histories, and set data to auto-expire after certain periods. These measures give you more agency over your interactions with AI systems, which aligns with Hoffman and Beato’s emphasis on individual empowerment, even if the broader choice architecture remains limited.