In November 2023, OpenAI’s board of directors fired CEO Sam Altman. Within five days, employee pressure, investor panic, and a threat from Microsoft forced them to take him back. That episode showed who has a say in decisions about one of our most powerful technologies: employees with equity stakes, investors with billions on the line, and tech giants with partnership agreements—not the workers who built AI, the communities that bear its environmental costs, or the public. In Empire of AI (2025), Karen Hao argues that firms like OpenAI work like empires: They extract resources from vulnerable communities, control knowledge to guard their interests, and justify it with the promise that artificial general intelligence (AGI) will benefit all of humanity.
The story of AI is usually told from one perspective: looking from Silicon Valley outward. Drawing on seven years of reporting, Hao tells it from the other direction: the vantage point of Kenyan workers paid poverty wages to scrub harmful content from ChatGPT’s training data, Chilean communities fighting to protect their drinking water from data centers, and researchers whose careers were ended for raising uncomfortable...
Hao argues that OpenAI didn’t stumble into power: It was built toward it through a combination of strategy, a consequential technical bet, and a mission designed to attract massive resources while neutralizing opposition. In this section, we’ll trace how Sam Altman and his cofounders constructed the company, why a prediction called the scaling doctrine became its defining strategy, and how a quasi-religious belief in artificial general intelligence (AGI) made the whole enterprise feel not just profitable, but necessary.
(Shortform note: Broadly, researchers think of AGI as machine intelligence that understands the world as well as humans do. But experts haven't agreed on a technical definition, and OpenAI's formulation doesn't clarify much, since machines are "smarter" than us at tasks like playing chess while still being inept at things toddlers find trivial. Experts also disagree on whether AGI is close, far, or even possible: Surveys of researchers reveal no consensus on when high-level AI might be achieved, and serious estimates range from "next year" to "decades away." In...
OpenAI’s scaling doctrine demanded something more costly than computing infrastructure: the extraction of human labor, physical resources, and creative work on a giant scale. Hao argues that, like historical colonial powers, AI companies have built a global system where value flows in one direction (from vulnerable communities toward a small elite) while the costs of that extraction remain invisible. The people bearing the burden are the same ones who’ve been subject to colonial extraction in the past: workers in crisis economies, Indigenous communities, and countries whose development was shaped by cycles of plunder. Here, we’ll examine the three forms of extraction that built the AI empire and what they cost.
(Shortform note: Scholars agree with Hao that we haven’t left the era of empires behind—we’ve just changed what they extract and how they justify it. In The Divide, Jason Hickel argues that Western powers never stopped extracting wealth from the Global South; they just developed subtler ways to do it:...
Hao argues that AI’s extraction of resources and creative work couldn’t have continued without three protections that kept its operations invisible: control over what could be known about AI firms’ technology, influence over the rules they operated under, and suppression of dissent from within. Without the first, critics could have documented the harms and acted on them. Without the second, those harms might have faced meaningful oversight. Without the third, insiders could have blown the whistle.
Hao argues that early AI research operated on a principle of openness: Researchers published their methods, shared their data, and subjected their findings to peer review. That norm collapsed as AI became commercially valuable. Companies stopped publishing meaningful technical details about their models, instead treating their architectures, training data, and performance limitations as proprietary secrets. This meant that users couldn’t know how much energy their queries consumed, regulators couldn’t assess whether models were safe for medical or legal decisions, and independent researchers couldn’t verify the claims companies like OpenAI, Google, and...
Throughout the book, Hao argues that the AI industry has kept public debate focused on a question it can always answer to its advantage: whether AI is good for humanity. This framing directs attention toward what AGI could offer in a hypothetical future and away from who controls and pays for it today. Stanford researcher Ria Kalluri says we should ask a different question: Does AI concentrate or distribute power? Hao suggests this is the right question to ask now because it can be answered with evidence, and because it shifts the conversation from outputs and promises to ownership and power. Judged by that standard, she says, the current model fails.
Why “Is AI Good for Humanity?” Might Be the Wrong Question
Hao and Kalluri are making a move that’s common among scholars who study AI: asking who the technology answers to. Questions about power can be answered with reporting—you can trace a supply chain, follow a dollar, document which communities got consulted. Questions about AI’s benefits tend to dissolve into speculation about the future. Meanwhile, some scholars have proposed versions of the power question that complement Kalluri’s:
In *[Atlas of...
"I LOVE Shortform as these are the BEST summaries I’ve ever seen...and I’ve looked at lots of similar sites. The 1-page summary and then the longer, complete version are so useful. I read Shortform nearly every day."
Jerry McPheeHao argues that the right question to ask about any AI system isn’t whether it’s good or bad, but whether it concentrates or distributes power. This exercise invites you to apply that framework to the AI tools in your own life.
Think about an AI tool you use regularly—perhaps a chatbot, a writing assistant, or a search engine. Hao argues that the labor and resources that went into building it were deliberately kept invisible. When you use this tool, how much do you know about who built it, whose data trained it, and under what conditions? What would you want to know about how it was developed?