
What if artificial intelligence (AI) isn’t humanity’s greatest threat, but its greatest opportunity for empowerment? In Superagency, Reid Hoffman (co-founder of LinkedIn) and writer Greg Beato challenge the dominant anxieties around AI, arguing that it can dramatically amplify individual human capabilities while creating collective benefits for society. 

They argue that the real risk isn’t AI becoming too powerful, but democratic societies withdrawing from AI development and ceding control to less benevolent actors. To learn more, continue reading our overview of Superagency.

Overview of Superagency

In Superagency, Reid Hoffman and Greg Beato argue that AI can help humanity achieve “superagency”—a state where AI amplifies individual human capabilities so dramatically that it creates collective benefits for society. Rather than viewing AI as a threat to human autonomy, Hoffman and Beato contend that if properly developed, it can function as an extension of human will, empowering people to accomplish more while preserving meaningful control over their lives and decisions.

This vision runs counter to much of the current discourse around AI, which often focuses on the possibility of catastrophes, job displacement, and loss of human relevance. Hoffman and Beato argue that these fears not only miss AI’s potential but also discourage people from helping to shape AI’s trajectory. Superagency draws on Hoffman’s experience scaling technologies like LinkedIn, his involvement in OpenAI and Inflection AI, and his work on books including Blitzscaling and The Startup of You. It also calls on Beato’s decades of experience writing about technology and culture for publications like The New York Times, Wired, and Nautilus.

What Is Superagency?

Imagine having a research assistant who never gets tired, never forgets anything, and can process information faster than any human. This assistant doesn’t make decisions for you, but it can help you understand complex topics, spot patterns in data, and explore novel solutions to problems. Now, imagine that this kind of support isn’t available just to you, but to everyone: students struggling with math, entrepreneurs building businesses, doctors diagnosing rare diseases, and researchers tackling climate change. This is the vision that Hoffman and Beato call “superagency,” a future where AI functions as an extension of your individual will, amplifying your abilities while keeping you in control of the decisions that matter. 

In this vision, AI works for you, helping you achieve outcomes you’ve chosen. This reframing directly addresses the most common fear about AI: that it’ll erode human autonomy. The authors argue the opposite is true. Properly designed AI systems increase your autonomy by giving you access to expertise, analysis, and insights that were once accessible only to specialists or the wealthy. A student in a rural area could access AI tutoring that rivals the offerings of elite universities. An entrepreneur could use AI to analyze market trends like a consulting firm. A patient could get preliminary medical guidance that helps them ask better questions during their doctor’s visit.

When Everyone Gets Superpowers, Everyone Benefits

The “super” in superagency refers to how individual AI empowerment creates benefits that multiply through society and even spread to those who don’t directly use AI tools. Think about how platforms like Wikipedia or LinkedIn create value. Wikipedia succeeds because millions of users contribute knowledge, corrections, and improvements—not because a small group of experts writes all the articles. LinkedIn became valuable because users built networks that benefited everyone on the platform, not because the company created all the professional connections. Hoffman and Beato call this the “private commons”—privately operated platforms that become more valuable as more people participate and contribute.

The principle of private commons applies to AI development as well, according to Hoffman and Beato. When AI tutoring systems help students learn more effectively, the benefits extend beyond those students to create a more educated workforce and a more innovative society. Likewise, when AI assists doctors in making more accurate diagnoses, it improves healthcare outcomes for entire communities. 

This collective dimension distinguishes superagency from simple productivity improvements. You don’t just get better at your existing tasks: You can access new possibilities that emerge as enhanced capabilities become widespread. The authors suggest that this could lead to breakthroughs on issues like climate change and poverty, which require coordinated action by individuals and institutions.

Why History Suggests This Vision Will Succeed

Hoffman and Beato contend that historical evidence supports their optimistic vision. The printing press and the internet both faced fierce initial resistance: critics worried that books would damage memory and spread dangerous ideas, and later feared that the internet would destroy face-to-face relationships and enable widespread misinformation. Yet each technology enhanced human agency and created collective benefits that were impossible to imagine beforehand. The printing press enabled the scientific revolution and modern democracy. The internet democratized access to information and enabled unprecedented global coordination.

Hoffman and Beato contend that this historical pattern is consistent: Transformative technologies initially disrupt existing systems and create new problems, but they also unlock human potential in ways that benefit society as a whole. The authors explain that the key to achieving positive outcomes with new technology lies in actively engaging with it rather than passively resisting it. When people participate in shaping how technologies develop, they're more likely to realize the benefits and minimize the harms. Hoffman and Beato argue that AI is following this same trajectory, and that societies that embrace these changes will thrive while those that resist them will fall behind.

Why Pursue This Vision?

Given AI’s ability to completely change our society, why not take a cautious approach and restrict its development until we’re certain it’s safe? Hoffman and Beato argue that this seemingly prudent strategy is actually counterproductive and dangerous. They make the case that fear-based restrictions on new technologies typically increase rather than decrease risks. They also contend that the opportunity costs of delaying the development of beneficial AI applications are enormous and immediate, and that, rather than protecting us, overly cautious approaches to AI may be the greatest threat to human agency.

Hoffman and Beato frame their argument by identifying four distinct attitudes toward AI development. Some critics focus primarily on existential risks from superintelligent AI and advocate for slowing or stopping development altogether. Others worry about near-term harms like job displacement, algorithmic bias, and privacy violations. At the opposite extreme, some technologists want unrestricted AI development with minimal oversight, believing innovation should proceed at maximum speed. 

The authors reject all three of these approaches, staking out a fourth attitude themselves: optimism about AI's potential, paired with a commitment to responsible development that prioritizes human agency. This framework helps explain why they argue against both excessive caution and reckless acceleration, advocating instead for active public engagement in shaping AI's development. In this section, we'll explore why they believe fear-based restrictions backfire, and what they see as the real stakes of delaying beneficial AI applications.

Why Fear-Based Approaches to Regulating AI Will Backfire

Imagine if we’d outlawed the printing press after the first time a book spread misinformation, or restricted the internet because early users encountered spam. Hoffman and Beato argue that this is what many AI critics are proposing today: halting or severely restricting AI development because of the possible risks. The authors contend that this fear-based approach makes negative outcomes more likely because restriction often increases rather than decreases risk. When you try to prevent all possible dangers of a new technology, you typically push development underground or into the hands of less responsible actors. 

This means that heavy regulatory barriers won't stop AI progress: They'll just shift it to organizations and nations that are less concerned with safety, transparency, and human rights. Hoffman and Beato note that these restrictions favor large corporations that can afford compliance costs while shutting out smaller innovators, academic researchers, and public interest groups. They argue that this creates exactly the concentration of power that critics claim to want to prevent, ensuring that only the biggest players shape AI. By contrast, more competition among AI developers creates stronger incentives for responsible development, as companies with poor safety records lose credibility and market share.

Hoffman and Beato argue that AI systems become safer through innovation, not restriction. When AI systems are deployed gradually and monitored carefully, problems surface quickly and can be addressed before they become widespread. For example, early versions of ChatGPT revealed issues with generating false information and responding inappropriately to certain prompts, problems that could only surface and be addressed with large-scale real-world use. 

The Real Stakes: What We Risk by Waiting to Develop AI

Hoffman and Beato argue that real people are suffering from problems AI could help solve. Think about the lives of people you care about: A family member struggling with a rare disease could benefit from AI-powered drug discovery that accelerates the development of new treatments. A child who’s falling behind in school could thrive with AI tutoring that adapts to their learning style. Each day these tools remain undeveloped represents a missed opportunity to improve lives. The authors contend the question isn’t whether these benefits are worth some risk—it’s whether we can afford not to pursue them.

Perhaps most fundamentally, restrictive approaches carry the opportunity cost of reduced human agency itself. When people don’t have access to AI tools, they remain limited by constraints that technology could help them overcome. A small business owner trying to compete with large corporations could use AI for market analysis and customer service. A non-native speaker could use AI translation tools to participate in academic or professional discussions. A person with mobility limitations could use AI-powered systems to maintain greater independence. Restricting access to these capabilities doesn’t just slow technological progress—it limits human potential at precisely the moment when it could be expanding.

The Global Stakes Make Delay an Even Greater Risk

The global competition for AI leadership adds another layer of urgency that goes beyond individual benefits. Hoffman and Beato point out that countries that achieve AI dominance will capture disproportionate economic advantages across multiple industries, from manufacturing to financial services and healthcare. AI leadership won't just mean better technology: It will underpin the economic competitiveness that supports jobs, wages, and living standards in your community. The countries that lead in AI will also set the standards for how it's used globally, just as the U.S. shaped internet governance by leading in internet development.

The authors also warn that if democratic countries withdraw from AI development out of caution, they risk falling behind economically while ceding technological leadership to nations that prioritize surveillance and control over freedom and agency. Authoritarian governments are already deploying AI for surveillance, social control, and political suppression. If democratic societies don’t develop their own AI capabilities and frameworks, they may find themselves at an economic disadvantage and vulnerable to AI-enabled influence from abroad.

How Do We Get There?

If we want AI to enhance rather than replace human agency, how do we actually make that happen? Hoffman and Beato propose using what they call a “techno-humanist compass”—a navigational tool for making decisions about AI development that consistently points toward outcomes that enhance individual and collective human agency. Unlike a blueprint that tries to predict every challenge in advance, this compass provides more dynamic guidance. The authors’ recommendations center on two key principles guided by this compass: involving real people in shaping AI systems, and creating governance structures that can adapt as quickly as the technology evolves.

Develop AI Through Real-World Engagement

The most important principle for achieving beneficial AI is ensuring that ordinary people have a direct role in shaping how AI systems work. Hoffman and Beato argue that AI development should be an ongoing conversation between developers and the millions of people who will use the systems they build, rather than a top-down process where experts decide what’s best for everyone else. Instead of perfecting technologies in isolation before releasing them, developers can release AI systems gradually while gathering feedback from real users. The authors contend that AI systems can only be made more useful, safer, and better aligned with human values when they encounter the full diversity of human users. 

OpenAI's approach with ChatGPT demonstrates how this works in practice: Rather than keeping the system locked in research laboratories until it was flawless, the company released it to users while clearly communicating its limitations and actively soliciting feedback. Millions of users discovered creative applications, identified problematic responses, and provided insights that shaped subsequent improvements.

Hoffman and Beato explain that public engagement also builds the trust and understanding necessary for AI to enhance human agency. When people develop informed opinions about what works well and what needs improvement, they can engage in better discussions about regulations and priorities. The authors emphasize that this approach requires responsible practices, not reckless experimentation. This means starting with limited user groups, establishing clear metrics for success and failure, maintaining human oversight throughout the process, and being prepared to modify or withdraw systems that cause problems.

Create Adaptive Governance That Evolves With Technology

The second key principle is developing governance approaches that can keep pace with rapidly evolving AI technology. Traditional regulation often creates rules that are either irrelevant by the time they're implemented or so rigid they stifle innovation. Hoffman and Beato propose “adaptive regulation” that's more like software development than traditional lawmaking: establish basic principles and frameworks, then iterate based on real-world outcomes. Instead of mandating specific technical approaches or prohibiting entire categories of AI development, regulations should focus on outcomes. Doing this would establish clear standards for safety, transparency, fairness, and accountability while allowing flexibility in how those standards are met.

This approach would require new models of collaboration between regulators, developers, and users. Hoffman and Beato explain that traditional regulatory agencies often lack the technical knowledge to understand AI systems or the agility needed to update rules quickly. They suggest creating specialized oversight bodies that combine technical expertise with regulatory authority and can operate at the speed of technological change rather than the traditional pace of government bureaucracy. International cooperation also becomes essential under this model: Democratic nations need to work together on establishing shared principles and standards while maintaining enough flexibility to adapt to local contexts and preferences. 

Hoffman and Beato contend that regular people like you play a crucial role in making adaptive governance work. This means staying informed about AI developments, providing feedback to companies developing AI systems, and engaging with your elected representatives about AI policies that reflect your values. Your choices as a consumer—supporting companies that develop AI responsibly and transparently—send market signals about what kinds of development should be prioritized. When millions of people actively participate in shaping AI governance rather than leaving it to experts, the resulting systems are more likely to enhance rather than diminish human agency—and to serve people like you, rather than just tech elites.


