PDF Summary: The Precipice, by Toby Ord


Below is a preview of the Shortform book summary of The Precipice by Toby Ord. Read the full comprehensive summary at Shortform.

1-Page PDF Summary of The Precipice

While the idea of apocalypse has featured in fictional stories for centuries, philosopher Toby Ord argues that today, existential catastrophe is a real threat. Humanity has reached a point where our actions and technologies pose a serious risk to our survival. Ord argues that if we want to survive and thrive, we must act now—understand the risks we face and take action to prevent, mitigate, and endure them. If we succeed, we could achieve an idyllic future of equity, health, and prosperity for our species.

In this guide, we’ll first discuss humanity’s future and potential for disaster. Then, we’ll present the three major types of risks Ord says we must understand—natural risks like asteroids, man-made risks like nuclear war, and future risks like advanced artificial intelligence. Finally, we’ll discuss what we can do to prevent these disasters and safeguard our future. In our commentary, we’ll supplement Ord’s discussion with opinions from other experts and discuss measures being taken to mitigate existential risks.

(continued)...

Climate Change

Ord explains that the second major man-made effect that threatens an existential catastrophe is climate change. Our atmosphere and its balance of gasses provide the planet with a stable temperature and pressure that allows life to thrive. When this balance gets thrown off, for example by human carbon emissions, the effects could eventually make Earth uninhabitable.

First, Ord explains that climate change could produce an existential catastrophe due to “feedback effects”—phenomena that accelerate the current rate of global warming. For example, as the planet warms, Arctic permafrost (a layer of frozen soil and rock that contains massive amounts of carbon) thaws and releases its stored carbon. This release would further accelerate warming.

(Shortform note: NASA scientists provide additional insights into these feedback processes. They explain that in addition to human actions, natural effects like precipitation, clouds, ice albedo (reflection), forest growth, and water vapor can all create feedback in our climate system. Further, not all feedback is positive (making things warmer)—for instance, increased cloudiness, a negative feedback, could potentially slow down global warming by reflecting sunlight back into space.)

Ord explains that another way climate change could lead to an existential catastrophe is if humans simply release more emissions. Our emissions are estimated to double by 2100, raising the global temperature by up to 4.5°C. A warming of 6°C would produce extreme effects, and a warming of 13°C—which Ord says is very possible by 2300—would likely produce an existential catastrophe.

(Shortform note: As of 2024, the European Environment Agency offers hope that Ord’s grim predictions may be mitigated somewhat. Their research predicts an expected decrease in greenhouse gas emissions in the European Union (EU) over the coming years, particularly in power and residential sectors. However, international aviation and shipping continue to grow their emissions, showing that even with these reductions from certain sectors, more effort is necessary for achieving the goal of climate neutrality by 2050.)

At an increase of 13°C, says Ord, we’d face reduced agricultural yields, sea level rises, water scarcity, increased tropical diseases, and ocean acidification. Most critically, it would lead to heat stress—in certain regions, the temperature and humidity would be so high that we’d be unable to cool our bodies by sweating, and therefore wouldn’t be able to survive. Making matters worse, the regions that would become uninhabitable are home to half the world’s population and produce much of our food—other regions would become overcrowded due to forced migration, and our food supply would be severely reduced.

(Shortform note: Scientists elaborate on Ord’s claim, specifying where heat stress is likely to occur and noting that it’s already beginning to happen. Regions with rapid growth and less capacity to adapt, like Sub-Saharan Africa, South-East Asia, and Latin America, will be hit the hardest. Further, we’re already seeing effects of global warming in these areas: On some days it’s too hot to work outdoors, which affects not only food production but also local economies. There are also effects Ord doesn’t discuss, such as impaired cognition and concentration in children.)

Environmental Damage

While many people believe that environmental issues like overpopulation and resource depletion threaten the survival of humanity, Ord says these really aren’t areas of concern—the rate of population growth has been decreasing since the 1960s, and science shows we have more than enough resources. The form of environmental damage most concerning to our survival as a species is biodiversity loss.

(Shortform note: While Ord notes that population isn’t a threat to our survival because its growth rate is declining, this slowdown might be threatening for another reason. Some analysts contend that population decline leads to a lack of innovation, and we’re going to need breakthrough innovations to figure out how to prevent, mitigate, and survive the catastrophes Ord discusses. This lack of innovation would also negatively impact the global economy.)

The current rate of species extinction is significantly higher than the historical average. It’s difficult to tell whether this signifies the early stages of a mass extinction, but the phenomenon is deeply troubling nonetheless. While less extreme than species extinction, population losses and regional disappearances of certain species severely impact ecosystem functions that benefit humans, like water purification and enhancement of soil quality. Though it’s hard to predict, Ord says that at some point these issues could impact food production, leading to an existential catastrophe.

Scientists’ Views on Biodiversity Loss and Population Growth

Scientists agree on the pressing issue of biodiversity loss due to human activities; however, while Ord is unsure whether this indicates mass extinction, many firmly argue that we’re already facing the sixth mass extinction.

Further, while Ord mentions population growth and resource depletion as less pressing than biodiversity loss, other scientists contend that human population growth, particularly since the agricultural and industrial revolutions, is one of the main causes of the unprecedented extinction rates at hand. Population growth means more humans to feed, clothe, and house. This development contributes to biodiversity loss not just through pollution and climate change, but also through habitat destruction (for houses, stores, factories, and so on), overhunting and overfishing to feed the population, and the forced relocation of invasive species to new ecosystems.

Potential Future Risks

Ord warns that we must also take into consideration future technologies that could pose an existential risk to the species. There are many catastrophes that could befall us, but Ord says the three most likely ones are pandemics, misaligned artificial intelligence (AI), and dystopian futures.

Pandemics

Ord explains that our vastly increased population and unhealthy farming practices have created many more opportunities for new human diseases that originate in animals to develop—for example, the H1N1 virus (swine flu). Further, modern means of transportation allow viruses to travel faster across the globe.

Ord notes that the development of dangerous pathogens also poses a serious risk to humanity. Pathogens created in experimental labs could leak into society and cause a pandemic. Or governments could use them as bioweapons, spreading disease across the globe.

(Shortform note: In addition to the intentional development of dangerous pathogens, experts warn that our impact on the environment could also lead to the emergence of new dangerous pathogens. For example, melting ice caps and wildfires could release previously dormant or contained viruses that our bodies don’t know how to fight. Further, changes in environmental conditions due to global warming could disturb the balance of microbes in and on our bodies. This could allow harmful bacteria to thrive or evolve into more virulent strains, increasing the risk of infections. Ultimately, we need to consider all present factors and take a broader approach to prevention and protection against the risk of pandemics.)

Artificial Intelligence

Another potential future threat to humanity that Ord discusses is misaligned artificial intelligence—AI that has surpassed human intelligence and isn’t aligned with human values. If this happens, we risk losing control over our fate to AI, similar to the way less intelligent species’ fates are largely dictated by humans. According to Ord, top researchers say there’s a 50% chance of AI surpassing human intelligence by 2061 and a 10% chance by 2025.

If AI reaches this level and isn’t aligned with human values, it would be hard to regain control over society. AI may fundamentally change society by taking control of financial institutions and other resources, reorganizing power structures, and installing societal priorities that don’t align with human values. For example, we could live in a world with constant war if an AI is programmed to ceaselessly acquire more resources for its society without regard for morality or justice.

Further, Ord says the AI would likely resist shutdown by saving copies of itself across the internet. Because of its superior intelligence, it would likely also be able to manipulate humans into assisting in its mission through words and images.

The Threat of Artificial Intelligence

There have been exponential developments in AI technology since The Precipice was published in 2020. A 2024 report commissioned by the US State Department reiterates all of the claims Ord makes about misaligned AI but with greater accuracy and additional detail based on recent AI developments. Experts believe there’s between a 4% and 20% chance of AI causing global and irreversible effects in 2024 alone.

The report specifies that AI development could destabilize global security, triggering a competition similar to the nuclear arms race, and that AI could itself become a weapon of mass destruction. One way it could do this, for instance, is by bringing down the entire power grid of North America, crippling infrastructure. It could also soon be capable of orchestrating disinformation campaigns that could destabilize society, weaponizing robotics (such as by launching drone attacks), and weaponizing biological and materials science in ways that could cause pandemics.

The researchers note that the likelihood of scientists losing control of the AI they develop is increased by the lax safety precautions observed in labs. Like Ord, they note that if AI broke free from human control, we would lose the battle, as it would be nearly impossible to shut down. One solution the researchers offer to prevent this outcome is creating emergency safeguards and limits on the computing power allowed for training AI models. This would prevent programs from becoming powerful enough to be autonomous.

Dystopian Futures

Dystopian futures don’t necessarily precipitate human extinction, but they’re catastrophic because they’d permanently prevent us from reaching our full potential—we’d become locked into a state we can’t escape. Ord says there are three main dystopian futures we’re at risk of encountering.

The first type of dystopian society, which Ord calls an enforced dystopia, is undesired by the masses and enforced by a small totalitarian regime. This society could rise by indoctrinating civilians and suppressing dissent with the help of AI. The second type, called an undesired dystopia, is undesired by the masses but results from human actions. For example, the capitalist race for the lowest costs and highest income could lead to a world where most of the population lives in bleak conditions with extremely low wages. The third type, called a desired dystopia, is desired by the masses, but only because they blindly believe certain ideologies and don’t realize the potential they’re throwing away. For example, we may let disease and illness run rampant because we believe treatment is immoral or unnatural.

Examples of Dystopian Futures

Stories from writers and filmmakers can help us understand the possibilities of a dystopian future.

For example, George Orwell’s 1984 is a prime example of an enforced dystopia: A totalitarian regime is in control of society and has brainwashed citizens into total obedience. One form of control is the creation of a new language called Newspeak, which is designed to limit free thought and promote regime doctrines—due to language limitations, people are literally unable to express dissent.

A thought experiment by Scott Alexander exemplifies an undesired dystopia: a society where everyone is forced to electrically shock themselves for eight hours a day. If someone doesn’t follow this rule, a second rule states that everyone else must unite to kill the rule-breaker. Although no one wants to shock themselves, no one breaks the rule because they know they would be killed.

An example of a desired dystopia can be seen in the Netflix series Brave New World, which is based closely on Aldous Huxley’s novel but features additions to the story such as AI. In the series, an AI program controls society—emotions are conditioned out of humans in childhood, people are fed pills to control their emotions, monogamy is forbidden, women no longer carry children, and children are instead created in labs and raised collectively. The society is devoid of core human emotions like love and passion, and of all virtues other than gratification; it’s stagnant, and its people’s lives are empty of feeling. However, they think they’ve reached peak happiness because they don’t know anything better is possible.

Part 3: Preventing Existential Catastrophes

Now that we understand the risks we face, Ord explains that we need to employ strategies to prevent them from materializing. In this section, we’ll explain Ord’s strategy for understanding the development of existential risks and his ideas about how humanity can take steps toward not only preventing them, but ensuring we reach our full potential.

The Development of Existential Risks

Ord explains that the first step in preventing existential risks is understanding their stages of development so we can intervene. There are three stages of an existential threat: initiation, growth, and completion. Initiation is the start of the issue—the result of either natural or anthropogenic causes. Growth is the means by which the problem scales in size (becoming a global problem). Completion refers to how, exactly, the problem results in extinction or permanent limitation of human potential—for example, by famine and population reduction.

To prevent initiation, we need to consider potential risks and their underlying causes—natural or anthropogenic—so we can take action. For example, we would want to prevent another world war because that would greatly increase the potential for nuclear war and environmental damage. If we fail to do so and catch the catastrophe in its growth phase, we must develop responses to limit its spread. If we catch it in the completion phase, we must focus on survival strategies—ways we can endure the effects that would otherwise result in existential catastrophe.

Existential Catastrophes and Possible Interventions

Researchers from the effective altruism movement illustrate the ideas that Ord discusses with examples. To increase our understanding of how catastrophes can develop and how we can intervene, let’s take a look at their explanation of how extinction could occur due to a pandemic caused by a biological weapon.

First, the initiation stage of this catastrophe would be a researcher creating a new pathogen and information on the pathogen being published to spread public awareness. In the growth stage, an entity (like a government) would then use that information to produce and release the pathogen, causing a pandemic. The completion phase would entail events like population loss and the breakdown of states and trade, then extinction.

To prevent the initiation stage from occurring, the following steps could be taken: 1) prevent/restrict funding for such research by raising funders’ awareness of the risks, 2) improve researchers’ choices on what to study and publish, 3) change incentives and norms among research journals around what to publish.

To prevent the growth phase, the following steps could be taken: 1) limit access to biotechnology so entities can’t reproduce pathogens, 2) improve systems for detecting pathogens and developing vaccines.

To prevent completion, we could utilize refuges, and research and develop alternative methods of food production that could feed the population despite a dwindling workforce.

Prioritizing Prevention

Ord says we also need to develop a strategy to determine which risks to focus on first, and which to put most of our time and resources into. For example, we should prioritize risks that precipitate other risks because they have a higher chance of resulting in an existential catastrophe—for instance, nuclear war might not kill us off, but the environmental damage that occurs as a result might.

(Shortform note: Existential risk researchers have combined Ord’s research with that of other scholars to create a list of how we should prioritize existential risks. Presently, the biggest threat is unaligned AI, followed closely by a human-developed pandemic and other potential man-made risks (for example, if we develop dangerous nanotechnology in the future). Next are nuclear war, climate change, and environmental damage. Following these are a natural pandemic and then a supervolcanic eruption. At the bottom of the list are flood basalt eruptions (volcanic events that flood vast areas of the earth with lava), an asteroid, and a supernova.)

In these calculations, says Ord, we should also consider actions or factors that reduce the potential of certain risks. For example, decreasing global economic disparity may not seem like a direct prevention of nuclear war, but it contributes to mitigating the risk by making countries more stable, self-sustaining, and peaceful.

(Shortform note: The Simon Institute for Longterm Governance, which we’ll discuss more in later commentary, has developed a comprehensive list of actions that can be taken to reduce existential risk by 2030. The two main priorities of these actions are to 1) improve citizens’ and governing officials’ understanding of existential risks, and 2) improve the way governing bodies handle existential risk (how they contribute to research and prevention). The list includes actions such as 1) creating a “global risk register,” which prioritizes risks according to severity, probability, and origins; 2) formulating best practices at national, regional, and international levels; and 3) increasing investments in AI safety.)

Preventing Catastrophe and Safeguarding Our Future

According to Ord, preventing existential catastrophes and safeguarding the potential of humanity requires change on a large and small scale. On a large scale, governments worldwide need to work together and create policies and projects to mitigate current risks, prevent future ones, and work toward a brighter future. Currently, most nations rely on other nations to engage in prevention while they continue with business as usual, ultimately increasing the risk of a catastrophe occurring due to negligence.

Instead, Ord suggests we form international institutions focused on prevention, in which all nations participate. We should also strengthen existing institutions and policies dedicated to prevention such as the World Health Organization and nuclear arms treaties. Further, Ord emphasizes the importance of prioritizing an increase in human understanding before an increase in technological advancement—in other words, before developing a technology, we must take time to understand the full scope of potential consequences, good and bad. If we can’t do this, it may be necessary to slow our technological advancement.

However, technological advancement is still important. We need to increase our scientific knowledge so we can develop methods and technologies that can help us mitigate risks. For example, establishing human life on another planet would provide a failsafe to preserve the species even if Earth becomes uninhabitable—perhaps even allowing us to survive until the end of the sun’s lifespan.

(Shortform note: Scientists have been searching for Earth-like planets for years. In 2023, they found a close match: a planet called Gliese 12 b. It’s about 40 light years away and has a warm surface temperature of about 42°C. Scientists are searching for more details about the planet, like whether it has an atmosphere. However, even if Gliese 12 b is a match, we still need to develop the technology needed to survive the 40-light-year journey there.)

We must also dedicate more time and resources to researching existential risks—discovering new ones, assessing the likelihood of different risks occurring, and determining how we can prevent, mitigate, and survive them. We must also spend more time researching how to make a better future for generations to come so we can reach our potential as a species.

Mitigation Progress Since The Precipice

Since the book was published, many, if not all, of the recommendations Ord makes on how to mitigate existential threats are being advocated for and executed with the help of the Simon Institute for Longterm Governance—an organization founded in 2021 to help multilateral organizations think long term. The institute makes technical information actionable so officials can incorporate it into policies and decisions, as Ord suggests. It has contributed to major intergovernmental mitigation efforts, continuously produces new research on existential risks, and has coordinated multiple workshops to encourage collaboration and action among researchers and policy makers.

In 2023, the Simon Institute released a paper on how governments can join together to reduce existential risk by 2030, listing three multilateral pathways and 55 milestones to pursue, 12 outcomes to reach, two priority instruments to develop, and 30 actions to implement (discussed in the previous Shortform note).

The multilateral pathways target general existential risk, biosecurity, and AI governance and include milestones like holding conferences, creating certain treaties, and forming new committees.

The outcomes listed in the report include increasing understanding of existential risk, management of risks, investments for risk reduction and resilience, and risk preparedness and response. On a practical level, this means encouraging collaboration between UN agencies to better tackle certain risks and having national governments dedicate a percentage of their budget to existential risks.

The two priority instruments listed in the paper are 1) an international plan and agreement on who needs to do what to prevent and prepare for existential risks, and 2) mechanisms to direct more funding to low-probability, high-impact risks—risks that, while unlikely, would be disastrous if they occurred.

Ord says that on a personal level, you can contribute to the cause by using your education and career specialties to help with research and prevention. If you’re not in a relevant field, you can donate to research centers, stay informed, and raise awareness through word of mouth.

(Shortform note: Other experts offer more specific advice on how the average person can get involved in existential risk prevention through their career. First, think about the kind of risk you want to focus on—for example, direct risks like pandemics or risk factors like global political instability. Then, brainstorm careers that can address these issues—for example, research or government policy. Finally, weigh your career options to determine the best one for you and identify the steps to pursue that career path.)

Humanity’s Potential

Ord explains that if we’re able to prevent existential catastrophes and meet our full potential, the human species could have a long and happy future. Given thousands of years without adding pollution to the atmosphere, we could see the Earth heal itself—oceans and forests would be restored, excess carbon dioxide would be removed from the atmosphere, and biodiversity might even return. We could potentially even survive long enough to migrate to other planets when our sun dies in eight billion years.

(Shortform note: If we’re not able to restore Earth and an interplanetary migration is required to save the species, Mars may be a viable option. Elon Musk and SpaceX hope to put a million humans on Mars by roughly 2045. The city plan calls for a large communal dome with smaller domes surrounding it where people would live. In one interview, Musk proposed heating the planet’s climate through a series of thermonuclear explosions that would create artificial suns. With time, Musk says it’s even possible we’d want to use bioengineering to create a new human species more suitable for life on Mars.)
