PDF Summary: Empire of AI, by Karen Hao

Book Summary: Learn the key points in minutes.

Below is a preview of the Shortform book summary of Empire of AI by Karen Hao. Read the full comprehensive summary at Shortform.

1-Page PDF Summary of Empire of AI

Artificial intelligence is depicted as the most disruptive technology of our era—a tool to cure disease, accelerate scientific discovery, and usher in an age of abundance—but the workers who built ChatGPT earned less than $2 an hour. The communities powering its data centers are competing with servers for clean drinking water. The artists whose work trained its image generators are losing their careers to tools built on their work. And the six largest technology companies collectively gained $8 trillion in value the year ChatGPT launched.

In Empire of AI, investigative journalist Karen Hao documents how this happened and argues that the word for it isn’t “disruption”—it’s “empire.” In this guide, we’ll trace OpenAI’s growth, who pays for it, how it protects itself from accountability, and what a different AI future might look like. We’ll also examine where Hao’s reporting has been corroborated by other journalists, where her empire metaphor finds support in economics and Indigenous thought, and what has changed since her book’s 2025 publication.

(continued)...

Hao explains that once OpenAI acted on the scaling doctrine, it proved self-fulfilling: As each new model outperformed its predecessor, it attracted more investment to fund the next, larger one—proof, in the industry’s eyes, that scaling worked. But the scaling doctrine also had a fateful practical implication. Rather than training models on carefully curated datasets, OpenAI began to feed its systems the raw internet: billions of web pages scraped without discrimination. This produced models of unprecedented capability, but also meant the training data was riddled with toxic, biased, and copyrighted material. Managing that contamination required an extensive human cleaning operation—and, as we’ll see in the next section, an exploitative one.

(Shortform note: The scaling doctrine isn’t purely speculative. Researchers have formalized a set of “scaling laws” to predict how much a model’s performance will improve if you make it bigger, feed it more data, or train it with more computing power. These laws work: Each new, larger model does perform better than the one before. But scaling laws predict performance on benchmarks, not the emergence of AGI. The jump from “larger models score better on tests” to “larger models will become as smart as humans” is a leap of faith. And because the financial architecture of the AI industry now depends on that leap panning out, the companies making it have every incentive to keep scaling—and to treat the resulting costs as the price of progress.)
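To make the scaling-law idea concrete, here is the general form these empirical laws take in the research literature (the Kaplan/Hoffmann-style formulation). The constants below are placeholders that each lab fits to its own training runs; none of the symbols or values come from Hao's book:

```latex
% Predicted test loss L as a function of parameter count N and training tokens D.
% E is the irreducible loss; A, B, alpha, beta are constants fit to experiments.
\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]
% Increasing N and D predictably shrinks the last two terms -- that is the
% "scaling law" -- but nothing in the formula says when, or whether,
% lower loss turns into human-level intelligence.
```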

The Empire’s Cost—and Who Pays

OpenAI’s scaling doctrine demanded something more costly than computing infrastructure: the extraction of human labor, physical resources, and creative work on a giant scale. Hao argues that, like historical colonial powers, AI companies have built a global system where value flows in one direction (from vulnerable communities toward a small elite) while the costs of that extraction remain invisible. The people bearing the burden are the same ones who’ve been subject to colonial extraction in the past: workers in crisis economies, Indigenous communities, and countries whose development was shaped by cycles of plunder. Here, we’ll examine the three forms of extraction that built the AI empire and what they cost.

(Shortform note: Scholars agree with Hao that we haven’t left the era of empires behind—we’ve just changed what they extract and how they justify it. In The Divide, Jason Hickel argues that Western powers never stopped extracting wealth from the Global South; they just developed subtler ways to do it: predatory loans, trade rules that favor wealthy nations, and institutions that dictate economic policy to indebted countries. Hickel argues that many countries survived centuries of plunder only to be shaped into economies that are always vulnerable to the next round of extraction—so modern powers like the AI industry inherit a global system structured to deliver cheap labor, lax regulations, and access to land and water as companies demand them.)

The Labor Supply Chain

First, the decision to train AI models on the bulk of the internet’s data necessitated massive amounts of human labor. Scraping billions of web pages without discrimination meant the training data was saturated with toxic, violent, and sexually abusive content. To build consumer products from this material, OpenAI and other AI companies needed workers to read through it and classify it in fine-grained detail—building the content filters that would make the models safe for public use. They also needed workers to perform repetitive tasks that teach models how to respond helpfully to humans: rating outputs, writing example answers, and correcting errors. None of this could be automated. It all had to be done by hand, at scale.

(Shortform note: Most of what OpenAI’s models learned, starting at least as early as GPT-3, came from Common Crawl, a free archive of pages scraped from across the public web to let researchers study internet-scale data. For OpenAI, the economics were irresistible: Building a comparable dataset would have taken years and cost millions, while Common Crawl’s data was ready and waiting. The catch is that Common Crawl’s archive was never cleaned up. In fact, its maintainers made a principled choice not to clean it up: A researcher studying hate speech or extremism needs that material preserved, not filtered out. When an AI company turns to that archive to train a chatbot, all of the filtering work falls on them to complete—or to outsource.)

Hao reports that to source the labor needed to clean the data, the AI industry developed a strategy: Enter collapsing economies where people would work at any price. When Venezuela’s economy imploded in 2016, AI data firms flooded its labor market. When workers burned out or began organizing, the firms moved on. By late 2021, when OpenAI contracted Sama to build a content-moderation filter for GPT, the pattern was established: Identify distress, extract labor, and repeat. Workers earned one to three dollars an hour, and were bound by nondisclosure agreements. Scale AI, one of the primary contractors, was valued at billions of dollars, but the workers who built that value had no equity, no benefits, and no job security.

(Shortform note: The pattern Hao documents is the same one Naomi Klein identifies in The Shock Doctrine. Klein argues that economic and political crises create windows when ordinary protections (like labor laws, democratic debate, and the ability of workers to negotiate) temporarily break down. Companies that want to operate on terms that would otherwise be politically impossible learn to move quickly during those windows. Seen this way, the AI industry’s labor strategy might be seen as the continuation of a pattern of corporate behavior where anything that would slow expansion gets treated as an obstacle to be worked around rather than a responsibility to be met.)

The psychological costs of this labor were severe. Workers tasked with content moderation had to read and categorize child sexual abuse material and graphic violence scraped from the internet—as well as similar content that OpenAI prompted its models to generate so they’d have a broader range of examples to classify. Workers developed trauma responses similar to those seen in social media content moderators. They had no meaningful mental health support and no way to discuss their work publicly, and the technology they helped build then compounded their losses: As companies adopted AI tools to replace human workers, the products built on this psychological toil destroyed economic opportunities in the communities that had produced it.

What OpenAI Already Knew

By the time OpenAI was contracting content moderation work to firms like Scale AI and Sama, the psychological costs of this kind of labor were already public record. In 2018, Selena Scola, who had developed PTSD while moderating content for Facebook, filed a lawsuit that eventually grew to cover more than 14,000 moderators and was settled for $52 million. Sarah T. Roberts’s Behind the Screen, a study of those who do this kind of work, was published in 2019, and clinical research on their symptoms was well underway. Given that evidence, observers say the nondisclosure agreements these workers sign deserve particular scrutiny.

An investigation of Sama’s Nairobi operation found that workers didn’t know they’d be doing content moderation until after they’d signed contracts with confidentiality clauses. The effect was isolation: They couldn’t speak about their work even to their families, and concerns about confidentiality kept them from using mental health services. Annotation and moderation workers have few opportunities for advancement, are made redundant by the tools they train, and gain no transferable credentials. What remains after the contract is the psychological damage, without career capital that might help them move forward.

The Environmental Costs

Second, the infrastructure required to execute the scaling doctrine is itself a form of resource extraction that communities are currently fighting. Data centers must operate continuously—they can’t rely on intermittent renewable energy—so most run on fossil fuels, extending the lives of coal and gas plants that were slated for retirement. Data centers also require fresh water for cooling: Other water sources cause equipment corrosion or bacterial growth. Building the hardware itself—the computer chips and servers—requires lithium and copper, minerals extracted from the earth at significant environmental cost.

(Shortform note: The costs Hao describes aren't a price paid to reach some end state; they're a permanent feature of the scaling paradigm. AI reproduces the Jevons paradox, a pattern economists first described in the 19th century: Efficiency makes a resource cheaper to use, which drives consumption up, not down. Inference (what happens every time someone types a question into ChatGPT) uses 60% of AI's total energy, and each new generation of models requires more GPUs running in parallel than the last. Even if OpenAI reached its finish line tomorrow, the bill would keep growing. When companies release smaller, more efficient models, those models don't replace the older ones; they get used for so many more tasks that the volume of queries swamps any per-query savings.)
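As a back-of-the-envelope illustration of that rebound effect, here is a sketch with invented numbers. The efficiency gain and query volumes are hypothetical, not figures from Hao or from any company:

```python
# Hypothetical illustration of the rebound effect described above:
# per-query energy falls, but query volume grows faster, so the total rises.
old_energy_per_query_wh = 3.0        # invented baseline, watt-hours per query
new_energy_per_query_wh = 1.0        # invented "3x more efficient" model
old_queries_per_day = 1_000_000_000
new_queries_per_day = 5_000_000_000  # cheaper queries get used far more often

old_total_wh = old_energy_per_query_wh * old_queries_per_day
new_total_wh = new_energy_per_query_wh * new_queries_per_day

print(f"Old total: {old_total_wh / 1e9:.1f} GWh/day")  # 3.0 GWh/day
print(f"New total: {new_total_wh / 1e9:.1f} GWh/day")  # 5.0 GWh/day, despite the efficiency gain
```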

A single query to ChatGPT—the public chatbot OpenAI launched in 2022—consumes 10 times the electricity of a Google search. Hao explains that AI infrastructure plans would require adding the energy equivalent of multiple Californias to the power grid. At the same time, most new data centers are built in water-scarce areas, which makes communities compete with computing infrastructure for access to drinking water. This played out in Cerrillos, a suburb of Santiago, Chile, where a proposed Google data center would consume water at a thousand times the rate of the local population. Chile also supplies lithium and copper for AI hardware, bearing multiple extraction costs for AI development.

(Shortform note: Hao corrected an error in her math that suggested the Cerrillos data center would consume a thousand times the local population’s water—the actual figure was “more than 100%.” But that figure still means that a single facility would outcompete an entire city.)

Where Does the Water Come From?

Hao’s “10 times a Google search” figure describes electricity, but the water usage picture is murkier, depending on what you count. Sam Altman has claimed that an average ChatGPT query uses about “one-fifteenth of a teaspoon of water,” and Google’s 2025 technical report landed on roughly five drops for a Gemini prompt. But those numbers only measure water evaporating off the servers themselves. Independent researchers also account for the electricity side of the equation—the water that evaporates at coal and gas plants that power the servers—and estimate each query uses from a few milliliters to more than 50. By one estimate, AI systems globally used hundreds of billions of liters of fresh water in 2025.
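For scale, here is a quick unit conversion comparing those per-query claims. The per-drop volume is a standard approximation (about 0.05 mL), and the "few mL to 50+" range simply restates the independent estimates quoted above; nothing here comes from Hao's own reporting:

```python
# Rough unit conversions for the per-query water estimates quoted above.
TEASPOON_ML = 4.93   # one US teaspoon in milliliters
DROP_ML = 0.05       # common approximation for one drop

altman_claim_ml = TEASPOON_ML / 15   # "one-fifteenth of a teaspoon" ~ 0.33 mL
google_claim_ml = 5 * DROP_ML        # "roughly five drops" ~ 0.25 mL

print(f"Altman's figure:      {altman_claim_ml:.2f} mL per query")
print(f"Google's figure:      {google_claim_ml:.2f} mL per query")
print("Independent estimates: a few mL to more than 50 mL per query "
      "(once power-plant cooling water is counted)")
```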

The problem isn’t just the volume of water, but where that water comes from. Two in three new AI data centers are landing in places where the water supply is already under strain, and that’s not an accident. Cheap land, cheap power, and minimal regulatory friction tend to cluster in communities least equipped to push back. Meanwhile, the crisis these facilities are being plugged into is bigger than most of us realize: The UN projects a 40% gap between global freshwater supply and demand by 2030, and half the planet’s population already lives through serious water shortages at some point during the year. A new, industrial-sized water consumer in Santiago or Arizona isn’t just tapping a resource—it’s accelerating scarcity.

The Extraction of Creative Work

Third, the AI industry’s extractive reach extended to the creative economy through a claim Hao sees as an imperial one: that the world’s creative output—the English-language internet, hundreds of thousands of copyrighted books, billions of images—was available to be scraped and used as training data without consent or compensation. AI companies argued this constituted legal fair use.

What Counts as Fair Use?

The question of whether AI companies can legally train their models on copyrighted work without permission is the subject of more than 50 pending federal lawsuits, one of the most prominent of which is The New York Times's suit against Microsoft and OpenAI. The central issue is whether scraping the internet for data to train a model counts as fair use: Courts have held that a fair use of copyrighted work must be "transformative," adding new expression, meaning, or purpose rather than merely substituting for the original. In practice, this means the use of copyrighted work is most defensible when a model's outputs are genuinely novel, and far less defensible when a model reproduces a creator's style or competes with the works it was trained on.

Hao frames this as a theft of labor and income, not fair use, but there's a deeper loss as well: the displacement of human expression. An LLM doesn't write the way a person does; it predicts the next most probable word. It converts language into numerical coordinates and identifies patterns in how those numbers relate to one another, operating at the level of probability, not meaning. When LLMs' outputs flood markets, they don't just undercut human creators on price. They blur the line between expression and imitation, making it harder for audiences to tell the difference and harder for creators to sustain the work that produces the real thing.
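To make that mechanism concrete, here is a minimal toy sketch of next-word prediction. The vocabulary, probabilities, and prompt are invented for illustration; a real LLM scores tens of thousands of tokens using billions of learned weights, but the basic step is the same:

```python
# Toy sketch of next-token prediction: pick the next word by sampling from a
# probability distribution the model assigns, given the words so far.
import random

def next_token(vocab_probs: dict) -> str:
    """Sample one word from a probability distribution over candidate words."""
    words = list(vocab_probs)
    weights = list(vocab_probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution a model might assign after "The cat sat on the"
probs = {"mat": 0.62, "floor": 0.21, "roof": 0.09, "equation": 0.08}
print("The cat sat on the", next_token(probs))
```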

Hao explains that for the artists, writers, and creators whose work was taken, the consequences were immediate. Entire industries were hollowed out by tools trained on the work of the people those tools displaced, and what had been middle-class careers were upended. (Shortform note: Hao’s observation that creative careers have been undercut by AI reflects a specific choice, not an inevitable outcome of AI. In Power and Progress, Daron Acemoglu and Simon Johnson argue that the same technology can be built to augment workers or to replace them—which suggests AI has been deliberately pointed toward replacing creative workers. The result is that middle-class creative work is being restructured into precarious, lower-paid, AI-adjacent labor.)

How the Empire Protects Its Power

Hao argues that AI’s extraction of resources and creative work couldn’t have continued without three protections that kept its operations invisible: control over what could be known about AI firms’ technology, influence over the rules they operated under, and suppression of dissent from within. Without the first, critics could have documented the harms and acted on them. Without the second, those harms might have faced meaningful oversight. Without the third, insiders could have blown the whistle.

Closing Off Knowledge

Hao argues that early AI research operated on a principle of openness: Researchers published their methods, shared their data, and subjected their findings to peer review. That norm collapsed as AI became commercially valuable. Companies stopped publishing meaningful technical details about their models, instead treating their architectures, training data, and performance limitations as proprietary secrets. This meant that users couldn’t know how much energy their queries consumed, regulators couldn’t assess whether models were safe for medical or legal decisions, and independent researchers couldn’t verify the claims companies like OpenAI, Google, and Anthropic made in their press releases.

AI Giants’ Approach to “Openness”

Other experts agree with Hao’s portrait of a closed-off AI industry. Researchers have published an AI transparency index since 2023, evaluating what companies disclose about their models. The average transparency score fell between 2024 and 2025, and while the index’s 2023 ranking was led by Meta and OpenAI, both dropped to the bottom by 2025. But a counter-movement has pushed in the opposite direction, illustrated by how models are shared. A trained model is a file of numbers: billions of parameters, called “weights,” that encode what it’s learned. OpenAI and Anthropic keep their weights behind a paid interface, but other firms release weights through platforms like Hugging Face, where anyone can download them to study and build on the model.

Meta’s Llama family, Mistral’s models, and DeepSeek-R1 are distributed this way. R1’s release was particularly jarring because it challenged the idea that frontier AI requires the massive scale the closed labs insist on—though specifics of its training remain contested. Still, open weights aren’t the same as full openness: Meta releases model weights but doesn’t disclose training data. That’s a narrower form of openness than AI once operated on—and it leaves intact the silence on environmental, labor, and data questions that Hao calls out.
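As an illustration of what "open weights" means in practice, here is a minimal sketch of downloading and running one such model with the Hugging Face transformers library. The model name and prompt are only examples; running this requires installing transformers and torch, plus accepting whatever license terms the publisher attaches to the checkpoint:

```python
# Minimal sketch: downloading open weights from Hugging Face and generating text.
# Example model only; any open-weight checkpoint works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # one of the open-weight models mentioned above

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # this download *is* the weights

inputs = tokenizer("Open weights let anyone", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=25)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```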

Hao cites Google’s firing of AI researcher Timnit Gebru as an illustration of what this shift meant. Gebru coauthored a paper warning of four risks of large language models: huge environmental costs, biased training data, datasets too vast to audit, and outputs so fluent that users mistake statistical pattern-matching for understanding. Google executives deemed the paper a liability and forced Gebru out. Effectively, this made it normal for PR teams to hold veto power over scientists who wanted to publish their findings. Researchers who depended on industry funding quietly returned to work, and as AI research migrated from universities into corporations, the field’s intellectual agenda narrowed around what was commercially useful.

(Shortform note: Gebru wasn’t the first to voice these concerns, but she raised them just as Google was betting on AI—and the company’s response followed a pattern seen in other sciences. For example, industry-funded drug studies are more likely to produce results favorable to pharmaceutical companies than independent studies: The funding shapes which questions get asked, which findings are published, and how results are framed. What looks like a scientific consensus may just be the consensus of researchers who depend on the companies they study. Hao argues that AI research operates under similar conditions—with the added problem that there are no regulators, no mandatory trial registrations, and no liability structure.)

Capturing the Rules

AI giants’ ability to direct the production of knowledge was reinforced by their control over policy. When Altman testified before the US Senate in 2023, he presented himself as a responsible actor who welcomed oversight. Hao explains that the regulatory frameworks that Altman and his allies proposed were built around the idea that the most sophisticated AI tools—and therefore the ones most in need of regulatory oversight—were the “frontier models,” computationally powerful systems defined by a specific (but arbitrary) threshold of computing resources. This directed regulatory attention toward managing the speculative future risks of hypothetical superintelligent systems rather than the present harms of systems already in use.

Hao reports that the proposed compute threshold was set just above the level OpenAI had used to train its most powerful model at the time—so the regulation would apply to OpenAI’s next-generation models but not to anything the company had already deployed. This meant the labor exploitation, environmental costs, and copyright violations could continue without scrutiny. Despite objections from independent researchers who noted that dangerous AI capabilities don’t require massive scale, this framing migrated into the Biden administration’s executive order on AI, the EU AI Act, and California’s proposed legislation. According to Hao, industry framing had effectively displaced independent scientific judgment in regulatory bodies on multiple continents.

How a Marketing Term Became a Legal Category

As Hao reports, lawmakers on three continents wrote OpenAI’s preferred vocabulary into their rulebooks, adopting terms like “frontier models” that don’t have settled definitions. Erik Larson argues in The Myth of Artificial Intelligence that the confusion that results from such usage has the effect of channeling research funding and policy attention toward hypothetical risks. At the same time, questions about how intelligence works—which might tell us which risks are real—go unfunded. When the science is this unsettled, whoever tells the most compelling story writes the definitions. If this sounds familiar, it should: We saw earlier how Altman used “AGI,” which he’s called “a very sloppy term,” to mean whatever the moment required.

“Frontier AI” makes the same move in the regulatory arena. The phrase was coined in a 2023 policy paper whose authors included OpenAI staff, and the UK government built a taskforce around the term within months. Under the resulting regulations, an AI model qualifies as “frontier” if it was trained using more than a specified number of FLOPs: a count of mathematical operations performed in training, which serves as a stand-in for computing power. The EU set its threshold at 10^25 FLOPs, the US and California at 10^26. But critics argue that computing power doesn’t map reliably onto what a model can actually do or what harms it can cause, and efficiency improvements quickly make any specific number obsolete.
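To see how such a threshold gets applied, here is a back-of-the-envelope sketch. The "6 × parameters × training tokens" rule of thumb is a common estimate from the scaling-law literature, and the model sizes below are invented examples, not figures from the book or from any regulator:

```python
# Rough sketch of how a compute threshold like the EU's works in practice.
EU_THRESHOLD = 1e25   # FLOPs threshold in the EU AI Act
US_THRESHOLD = 1e26   # FLOPs threshold in the 2023 US executive order

def training_flops(parameters: float, tokens: float) -> float:
    """Estimate total training compute with the common 6 * N * D rule of thumb."""
    return 6 * parameters * tokens

# Hypothetical models for illustration:
small = training_flops(parameters=7e9, tokens=2e12)    # ~8.4e22 FLOPs
large = training_flops(parameters=1e12, tokens=15e12)  # ~9.0e25 FLOPs

for name, flops in [("7B-parameter model", small), ("1T-parameter model", large)]:
    print(f"{name}: {flops:.1e} FLOPs -> "
          f"EU 'frontier'? {flops > EU_THRESHOLD}; US 'frontier'? {flops > US_THRESHOLD}")
```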

Suppressing Calls for Accountability

The AI industry’s control over what people knew about it—and its capture of policy regulating it—were reinforced by its suppression of accountability. Hao writes that OpenAI’s nonprofit board, whose mandate was to hold the company to its mission, was displaced by commercial interests. Concerns built as chief scientist Ilya Sutskever and others watched behavior they found disqualifying: Altman hid a product rollout from the board, owned a startup fund with conflicts of interest he didn’t disclose, and told different people different things about the same decisions.

Sutskever, who’d grown convinced that Altman was the wrong person to lead OpenAI toward AGI, brought his concerns to board member Helen Toner. In November 2023, the board acted to remove Altman, but the response from employees, investors, and Microsoft was overwhelming. Without allies willing to publicly defend their decision, the board couldn’t make its case without exposing the colleagues who had brought them evidence. A legal review confirmed the behavior those colleagues were worried about but concluded that it didn’t disqualify Altman from leadership. The report was never released. Altman was reinstated within five days, and the directors who’d tried to remove him were replaced.

OpenAI’s Accountability Sink

OpenAI’s 2023 crisis sounds like an interpersonal drama, but a more revealing question is structural: Why didn’t the mechanism Altman had pointed to as proof that OpenAI could be trusted with AGI actually work when it was needed? OpenAI’s nonprofit board had been created as a failsafe: Its directors owed their legal duty to the nonprofit’s mission, not to the investors in the capped-profit subsidiary. The mission came first if the two conflicted, and OpenAI’s charter declared that the company’s “primary fiduciary duty is to humanity.” On paper, this governance structure gave the board the authority to protect the mission. In practice, as Hao reports, the directors who invoked that authority were quickly replaced.

This is a pattern Dan Davies (The Unaccountability Machine) calls an “accountability sink,” a structure that negates negative feedback. Sutskever and others documented their concerns, but when the board acted on that feedback, every party with real leverage—employees worried about equity, investors with billions committed, OpenAI’s PR team—pulled the other way. In the aftermath, the public never got a substantive account of what the board had found. Feedback entered the system, but as Toner later argued, nothing came out the other side. This episode suggests that when a tech giant’s valuation depends on the story its founder is telling, the company has every incentive to shield the founder and protect the narrative.

A second wave of challenges followed in 2024. The head of OpenAI’s safety team publicly resigned, stating that the company’s promises to ensure safety had taken a backseat to its push to release products. Another researcher who left the company forfeited nearly $2 million in vested equity rather than sign a nondisclosure agreement. Hao contends the company’s response was to manage the narrative rather than address the underlying concerns. By the end of 2024, Altman moved to dismantle the governance structure that had made the board’s challenge possible: steering OpenAI toward a conventional for-profit model and removing a provision that would have curtailed commercialization once AGI was achieved.

Hao finds a coda to this pattern in the story of Altman’s sister Annie, who made allegations of childhood abuse which Altman and his family have denied. What Hao emphasizes is OpenAI’s response: Its chief communications officer raised Annie’s mental health history unprompted during a conversation with Hao, in a manner Hao reads as an attempt to discredit a potential source. OpenAI’s communications apparatus had become an instrument of reputation management: a tool for the empire to protect itself and the course it wanted to chart.

The Accountability Fight, Continued

Hao’s reporting ended in early January 2025, but the accountability story has kept unfolding. OpenAI’s non-disparagement agreements—contracts that prohibit employees from saying anything critical about a former employer—became a public scandal when it was reported that they were unusually aggressive: They bound signers for life, and employees who refused to sign could lose their vested equity, which often represented the bulk of their compensation.

The departures of senior safety figures have also continued. One of the most visible is Steven Adler, who used an October 2025 New York Times op-ed to argue against trusting OpenAI’s safety judgment. The restructuring Hao describes as underway was completed in October 2025. The for-profit arm was converted into a public benefit corporation (PBC), which requires directors to balance shareholder returns against OpenAI’s declared public purpose. Microsoft holds a 27% equity stake and has secured access to OpenAI’s technology through 2032, including any models that qualify as AGI, a reversal from their original partnership.

The communications pattern Hao describes has also continued. Altman’s sister Annie filed a civil suit in January 2025 alleging childhood sexual abuse. The family issued a statement denying the allegations, and Altman filed a defamation countersuit. A federal judge dismissed Annie’s complaint as time-barred in spring 2026, but allowed her to refile under a statute for childhood sexual abuse, while permitting Altman’s countersuit to proceed. In the meantime, independent reporting (including a New Yorker investigation and The OpenAI Files) has continued to document concerns about Altman from people who’ve worked with him.

Resistance to the Empire

Throughout the book, Hao argues that the AI industry has kept public debate focused on a question it can always answer to its advantage: whether AI is good for humanity. This directs attention toward what AGI could offer in a hypothetical future—and away from who controls or pays for it. Stanford researcher Ria Kalluri says we should ask a different question: Does AI concentrate or distribute power? Hao suggests this is the right question to ask now because it can be answered with evidence—and because it shifts the conversation from outputs and promises to ownership and power. Against this question, she says the current model fails.

Why “Is AI Good for Humanity?” Might Be the Wrong Question

Hao and Kalluri are making a move that’s common among scholars who study AI: asking who the technology answers to. Questions about power can be answered with reporting—you can trace a supply chain, follow a dollar, document which communities got consulted. Questions about AI’s benefits tend to dissolve into speculation about the future. Meanwhile, some scholars have proposed versions of the power question that complement Kalluri’s:

In Atlas of AI (2021), Kate Crawford argues that the most relevant questions about any AI system aren’t about what it can do—they’re about whom it serves, what the political economies of its construction are, and what its wider planetary consequences look like.

In Race After Technology (2019), Ruha Benjamin coined the term “New Jim Code” to describe how new technologies built on historical data can automate and disguise racial hierarchies—as when predictive policing algorithms trained on biased arrest records direct officers back to the same neighborhoods, or when hiring algorithms filter out résumés from historically Black colleges or women’s colleges. Her implicit question for any AI system: Does it deliver what it promises, or does it quietly recycle a familiar hierarchy?

In Automating Inequality (2018), Virginia Eubanks studies what happens when governments use algorithmic systems—from welfare eligibility algorithms to predictive models that flag families for child welfare investigations—to make decisions about poor Americans. She argues these systems give the middle class “ethical distance” from decisions that reshape other people’s lives. Her test for any automated system, AI or otherwise: Would you accept this system if it were pointed at you?

But Kalluri’s question also illuminates what an alternative might look like: AI developed with the consent of the communities whose resources it uses, fulfilling purposes those communities need, and governed by the people it serves. Hao argues this isn’t utopian. For instance, Te Hiku Media, a Māori radio station in New Zealand, built a speech-recognition tool to preserve te reo Māori, a language colonial authorities once tried to eliminate, using recordings contributed by 2,500 community members, two GPUs, and a modest open-source model. The project worked because it was designed for one specific purpose and governed on the principle that the data belonged to the people who gave it, not to whoever owned the computing power.

Hao argues that the path from here to there runs through contestation at every point in the AI supply chain. The empire’s power rests on a fiction: that the data, land, labor, and creative work it extracts are legitimately available for the taking. Communities that refuse this premise are asserting prior ownership over resources that the empire treats as free. Once that ownership is asserted, the empire must either justify its claims or negotiate. The current trajectory of AI was built through specific choices. Refusing the premise that those choices were inevitable is where a different future begins.

Is the Alternative to Extraction Really New?

Hao’s argument finds an intellectual companion in the work of botanist and Citizen Potawatomi Nation member Robin Wall Kimmerer. In Braiding Sweetgrass, Kimmerer lays out what she calls the Honorable Harvest, a set of principles drawn from Indigenous practice: Ask before you take, take only what you need, and never take the first or last. Use what you take well and give a gift in return. These guidelines are reflected in the Te Hiku framework Hao describes: The Māori radio station asked its community before building its AI tool, used only what community members chose to donate, built something the community actually needed, and returned the tool to the people who made it possible.

In other words, what Hao presents as a hopeful exception to Silicon Valley’s model, Kimmerer would recognize as a very old way of relating to resources, freshly applied to a new domain. Kimmerer’s later book, The Serviceberry, sharpens this into an economic argument: Market capitalism isn’t going to disappear tomorrow, but parallel “gift economies” based on reciprocity can be built alongside it—and over time, can shift the balance. That’s essentially Hao’s argument about how change happens, recast in ecological terms. In that sense, Hao’s conclusion is an old one newly applied: When people examine the power structures they’ve built, they can also begin to imagine building different ones.
