PDF Summary: Homo Deus, by Yuval Noah Harari
1-Page PDF Summary of Homo Deus
For millennia, humans struggled with three serious problems—famine, plagues, and war—which led to the deaths of millions of people and to the rise and fall of global empires. People coped with these problems and answered life's questions with religion. However, in the modern era, we no longer rely on prayer—we’ve mostly overcome these three problems through the development of technology and medical knowledge.
In Homo Deus, Yuval Noah Harari, a professor at the Hebrew University of Jerusalem, envisions a future in which technology replaces humanist ideals and liberal government. Dissecting the concepts of religion, immortality, and technology, Harari argues that the world of the future may be run by advanced algorithms and artificial intelligence, not human beings.
Religious narratives, including those spread by liberalism, contain three parts:
- Ethical judgments: statements that dictate what’s right and wrong, such as “murder is wrong.”
- “Factual” statements: statements that draw on religious texts, history, or a scientific perspective to assert something as fact, such as “God said thou shalt not kill.” Note: These statements aren’t always objective facts; they often present a perspective framed as fact. Examples include “Life starts at conception” and “Jesus Christ is the Son of God.” While such statements are factual to followers of the religion, they’re not provable by science.
- Guidelines: statements that combine ethical judgments and factual statements to guide followers in a particular direction, such as “Christians should be pro-life.”
As a religion, liberalism contends that freedom is more important than equality (ethical judgment) because human beings possess free will and a unique, singular voice (“factual statement”). Therefore, the government should value the individual perspectives of its citizens (guideline). However, recent scientific studies expose flaws in liberalism’s “factual” statement through research calling into question the two key liberal concepts: free will and individualism.
1) Free Will
For centuries, humans have believed they possess the power to make their own decisions. However, neuroscience and brain-mapping research challenge the theory of free will.
The electrochemical processes in the brain are subconscious, meaning humans have no control over the neural system that creates thought or action. When external stimuli cause a reaction in the brain, the human body will naturally respond to the electrical and chemical interactions. For example, you don’t choose to get angry. Anger emerges naturally due to the body’s response to external stimulation.
These reactions can be either deterministic or random, but they’re never “free”:
- A deterministic reaction is the direct response of the brain to an external stimulus. For example, if you accidentally put your hand on a hot pan, the electrical signals in your brain will tell you to retract your hand.
- A random reaction is the result of an unpredictable event in the brain, such as the random decay of an atom or the misfiring of an electrical impulse. For example, your brain may accidentally cause you to shiver after randomly firing off an impulse.
2) Individualism
Liberals also believe in individualism, or that human beings have a singular, unique voice that leads them towards their true goals. However, researchers have discovered that human behavior has nothing to do with a “singular, unique voice.” Rather, human thought is dictated by the interactions between the two hemispheres of the brain, which create two versions of the human experience—the experiencing self and the narrating self:
- The experiencing self: Usually controlled by the right hemisphere, the experiencing self processes moment-to-moment information. Most people associate this “self” with instinct. For example, if you hit your head on a door frame, the experiencing self would cause you to grab your head, check for blood, and feel the pain of the impact.
- The narrating self: Usually controlled by the left hemisphere, the narrating self tries to rationalize past behaviors and justify future decisions. Most people associate this “self” with identity. For example, if you hit your head on a door frame, your narrating self may rationalize your clumsiness by attributing it to exhaustion while making you more conscious of the door frame for the next few days.
Both “selves” interact to create perspective and inform decision-making. The experiencing self can support or derail plans made by the narrating self. For example, if you decide to go on a diet, your experiencing self may not feel like cooking one night, leading you to order a pizza instead.
The narrating self, on the other hand, can frame in-the-moment experiences. For example, someone fasting before surgery is going to feel differently than someone fasting for religious reasons. While both parties are experiencing hunger, their narrating selves create perspectives that shape the way they respond to their hunger.
The Future of Liberalism
As the concepts of free will and individualism continue to be challenged, three potential developments could wipe out liberalism in the 21st century:
- The loss of military and economic usefulness
- The rise of decision-making algorithms
- The creation of the “superhuman”
The Loss of Military and Economic Usefulness
The first potential development is that technology will make humans unnecessary to the economy and military, leading political and economic systems to devalue the human perspective. Today, one drone specialist can do the job of a team of soldiers, and a mechanical arm can work the assembly line without tiring. Because of this, the masses won’t have as much to contribute to economic and political systems.
If machines replace humans, will the human experience have any value? Many experts argue that it won’t. In fact, some predict that intelligent computers may view humanity as useless and a threat to technological superiority, leading them to eradicate humanity entirely.
The Rise of Decision-Making Algorithms
The second potential development predicts that algorithms (rules applied by computers) will one day make choices for us. Liberalism relies on individualism and the belief that human beings know things about themselves that no one else can discover.
However, as technology continues to advance, researchers may be able to develop an algorithm that can process more information than the human brain can, allowing it to understand people better than they know themselves. If this occurs, people will start relying on external algorithms to guide their behavior instead of their internal voices. Eventually, as the algorithms receive more power and control, they may develop sovereignty, making decisions for themselves and manipulating humans to make particular choices.
The Creation of the “Superhuman”
The final potential development predicts that humanity will value the individual experiences of “superhumans” over those of the common man. The creation of “superhumans” will likely be the result of a small, elite group of humans upgrading their bodies and brains with biotechnology, creating a more powerful biological caste.
Liberalism can’t survive with biological inequality because the experiences of “superhumans” and humans will be inherently different and unrelatable. For example, if a “superhuman” has a chip implanted into their brain that allows them to access data from the internet, the way they experience the world will be completely different from that of the average human being.
The Future: Techno-Religions
If liberalism dies, other religions will emerge to take its place. Because of the increasing impact of technology, these will probably center on technology, creating a new form of belief: techno-religion. Techno-religions promise the guidance and salvation of traditional religions, but they deliver happiness through technology rather than through belief in celestial beings.
Techno-religions can be divided into two categories:
- Techno-humanism: The belief that Homo sapiens should use technology to create Homo deus, ensuring that humanity maintains superiority on Earth.
- Dataism: The belief that Homo sapiens have run their course and should pass superiority on to advanced algorithms.
Techno-Humanism
Techno-humanism maintains many traditional humanist beliefs but accepts that Homo sapiens, as they exist today, have no place in the future. Because of the rapid advancement of artificial intelligence, techno-humanists believe that humanity must focus on upgrading the human mind if it wishes to compete with advanced external algorithms.
The techno-humanist perspective is most closely related to the evolutionary humanists of the 20th century. However, where evolutionary humanists such as Hitler believed the superior human could only emerge through the use of selective breeding and the eradication of “inferior” beings, techno-humanists strive to achieve the next phase of evolution peacefully, using genetic engineering, human-computer integration, and nanotechnology.
The Human Traits of the Future
Historically, human traits have evolved naturally through changes in political and social settings. For example, ancient humans likely had an enhanced sense of smell they could use to hunt. However, modern humans no longer require a keen sense of smell to survive. Because of this, the areas of the brain that were once used to process smells have evolved to focus on problem solving, critical thinking, and comprehension.
In the future, humans will likely continue to evolve according to political and social needs, but in a more direct and immediate way. If techno-humanists are able to upgrade humanity, the people in charge of the technology will get to determine which traits are useful and which aren’t, then develop technology to improve or eradicate certain feelings or behaviors.
Threats to Techno-Humanism
Because techno-humanism is a humanist movement, it emphasizes the importance of human desire. However, the technologies it pursues aim to control human desire, not listen to it. For example, if researchers discover a way to easily regulate chemical imbalances in the brain, they could find a way to “turn off” mental illnesses such as depression and anxiety.
However, if this technology fell into malicious hands, someone could hypothetically create an obedient (but happy) populace. Taking this one step further, if an AI gained control of the technology, then the behavior of that populace would no longer be determined by humans at all.
Dataism
While some cling to the ideals of humanism, others have turned to a more extreme version of techno-religion: Dataism. Dataism operates under the belief that the universe is connected by the flow of data and that the value of anything, human or otherwise, can be determined by its ability to process data.
According to Dataism, human experiences aren’t valuable and Homo sapiens aren’t a precursor to Homo deus. Dataists believe that the supremacy of humanity has come to an end because organic algorithms can no longer process the amount of data that flows through the universe. The future requires a more complex system that can process information more efficiently than the human mind.
To accomplish this, Dataists want to work with AI to create the “Internet-of-All-Things,” an all-encompassing data-processing system that will spread throughout the entirety of the galaxy, if not the universe. This system would become God-like, being everywhere at once and shaping the cosmos to its will. Eventually, humanity would merge with this system, giving themselves over to the all-knowing entity.
The Human Contribution
As the “Internet-of-All-Things” takes shape, the source of meaning and authority will shift from the individual to the global data-processing system. Because meaning is attached to the all-knowing system, human experiences will only hold value if they contribute to that system.
According to Dataism, the only thing that makes humanity superior to other animals is its ability to share information with the system directly. Though dogs and people both contribute data, dogs can’t write a blog post or search on Google. As the internet continues to increase in size, human beings are turning into small contributors to a massive system that no one fully comprehends.
The Future of Dataism
The shift from a human-centric model to a data-centric model would take at least a few decades, if not a few centuries. Just as the humanist revolution took time to develop, elements of Dataism will begin to emerge alongside contemporary perspectives, slowly adjusting human life towards a centralized, external processing system.
Initially, Dataist movements will likely spread by appeasing humanist ideals. Humans may work towards the creation of an “Internet-of-All-Things” with the hope that it can continue to improve humanity’s quest for health, happiness, and power. However, once the omniscient entity is created, humanist projects will likely get pushed to the side, making human beings cogs in the operation of a much larger machine.
Over time, the “Internet-of-All-Things” may develop more efficient “cogs” to replace human beings, eventually deeming them irrelevant in the grand scheme of the universe. While humans may try to take credit for creating the “Internet-of-All-Things,” they may eventually be forgotten, remembered as just a small blip in the near-infinite flow of data.