AI & Decision-Making: From Free Will to Algorithms

This article is an excerpt from the Shortform book guide to "21 Lessons for the 21st Century" by Yuval Noah Harari. Shortform has the world's best summaries and analyses of books you should be reading.

Like this article? Sign up for a free trial here.

How is AI changing decision-making? What decisions are already made with the help of AI? Can reliance on AI for decision-making cause us to lose our freedom and ability to make our own choices?

Although still in its infancy, AI is already changing the way we make decisions. For example, institutions already use algorithms to decide which loan applicants to approve or deny. Going forward, AI decision-making could easily extend to personal decisions, such as where to go to college, which career to pursue, and whom to marry.

Keep reading for more about AI and decision-making.

AI & Decision-Making: Shifting From Free Will to Technology

As algorithms provide increasingly accurate suggestions, their convenience is almost irresistible—but over-relying on AI for decision-making can cause us to lose the freedom and ability to make our own choices.

Delegating decision-making to AI is another way that technology is undermining liberalism, which is all about freedom and personal liberties—to vote, to buy goods in a free market, and to pursue individual dreams and goals with the protection of human rights.

Before the advent of liberalism, societies were guided by divine messages from the gods. In the last few centuries, that authority shifted from gods to free will. Although free will feels free, it's actually a biochemical response honed by evolution to help you survive and thrive. For example, when you see a snake, your impulse to run away is merely an evolutionary response to keep you safe. Similarly, when you feel bad after an argument with a friend, your desire to make amends is not purely emotional, but rather a function of your biological wiring to cooperate within a community.

This biochemical process meant to promote your safety and well-being, which we call free will, has historically been a perfectly valid method of making decisions and running democracies. However, science has developed technology that can not only replicate that process but also perform it better than you can. As people shift authority from free will to computer algorithms, liberalism becomes increasingly obsolete.

People have already delegated some tasks to algorithms: You let Netflix suggest your next movie, and Google Maps tells you when and where to turn. Each decision that algorithms make for you has two effects:

  1. Your trust in the algorithm increases. When Netflix suggests a movie and you end up loving it, that experience reinforces your reliance on Netflix's recommendations. Similarly, if Google Maps tells you to turn left but you turn right and get stuck in traffic, next time you'll follow Google's directions. When you do, and you arrive at your destination without traffic, you'll not only gain trust in Google Maps but also lose trust in your own judgment.
  2. The algorithm learns more about your preferences, enabling it to make even better decisions for you in the future. Each improvement reinforces your growing trust in its decision-making and your shrinking trust in your own (a minimal sketch of this feedback loop follows the list).
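
To make that feedback loop concrete, here is a minimal, hypothetical sketch of a recommender that updates a per-user preference profile after each reaction. Harari describes no implementation; the `PreferenceModel` class, its genre scores, and the update rule are all invented for illustration.

```python
# Hypothetical sketch of the learn-and-trust loop described above: each
# reaction you report nudges the model, so its next suggestion fits better.

from collections import defaultdict

class PreferenceModel:
    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        self.scores = defaultdict(float)  # genre -> estimated preference

    def recommend(self, catalog: dict) -> str:
        """Suggest the title whose genre currently scores highest."""
        return max(catalog, key=lambda title: self.scores[catalog[title]])

    def update(self, genre: str, liked: bool) -> None:
        """Nudge the genre's score toward the observed reaction."""
        target = 1.0 if liked else -1.0
        self.scores[genre] += self.learning_rate * (target - self.scores[genre])

catalog = {"Space Saga": "sci-fi", "Meet-Cute": "romance", "Heist Night": "thriller"}
model = PreferenceModel()
model.update("sci-fi", liked=True)   # you loved last week's suggestion...
print(model.recommend(catalog))      # ...so this week's pick is "Space Saga"
```

Every pass through recommend, react, and update makes the next suggestion slightly better, which is exactly the reinforcement the list above describes.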

The algorithms won't be perfect, and they won't make the best decision every time—but they don't have to. As long as algorithms make better choices on average than humans do, they'll still be considered the better alternative. Additionally, if you wear biometric sensors on or inside your body, those sensors can monitor your heart rate, blood pressure, and other indicators of your preferences, opinions, and emotions. Using this data, the algorithm can make even better-informed decisions for you.

This reliance on algorithms can easily snowball into more and bigger decisions, such as where to go to college, which career to pursue, and whom to marry. An algorithm that uses your biometric data can learn what makes you laugh, what makes you cringe, and what makes you cry. This algorithm could use that data to find a compatible partner for you to marry, and it would probably make a better choice than you would with your free will, since your decision might be influenced by a past breakup or otherwise biased in some way.

If computers made all of your big decisions, your life would probably be much smoother, free of the stress of decision-making and the consequences of poor choices. But what would that life be like? So much of the drama and action in day-to-day life revolves around decision-making—from deciding whether to take on a project at work to figuring out where to relocate your family. The value humans place on decision-making is reflected in various institutions. For example:

  • Liberalism is all about the individual’s choice to vote as she wants and buy what she wants. 
  • Religions emphasize making the right choices and resisting the wrong ones.
  • The arts reflect the prominence of decisions in human lives, as in Hamlet’s famous dilemma, “to be or not to be.”

When humans rely on algorithms to make every choice for them—essentially molding the path of their lives—what will humans’ role be, besides providing biometric data to be used in the AI decision-making process and then carrying out the verdict?

AI and Ethical Decision-Making

Some of the most difficult and nuanced decisions people have to make are about ethical dilemmas. If they’re programmed to do so, algorithms could even handle ethical decisions—but the capability would come with pros and cons. 

On the positive side, algorithms would make the ethical choice every time. The computer wouldn’t be swayed by selfish motives, emotions, or subconscious biases, as humans are. Regardless of how resolute a person may be about ethics, in a stressful or chaotic situation, emotion and primitive instincts kick in and can override philosophical ethics. Additionally, a hiring manager can insist that racial and gender discrimination are wrong—but her subconscious biases may still prevent her from hiring a black female job applicant. 

On the negative side, delegating decisions to machines that follow absolute ethics raises the question: Who decides which philosophy is programmed into the software? Imagine a self-driving car cruising down a road when children run into the street in front of it. In a split second, the car’s algorithm determines that there are two choices: 

  1. Run into the children. The computer would opt for this choice if it's programmed to follow an egoistic philosophy, which saves the car owner at all costs.
  2. Swerve into oncoming traffic, which would almost certainly kill the owner of the car, who's in the passenger seat. The computer would opt for this choice if it's programmed to follow an altruistic philosophy, which saves others at the car owner's expense. (A minimal sketch of this choice as a programmed setting follows the list.)
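
To see how such a philosophy could literally be a configuration choice, consider this deliberately simplified, hypothetical sketch. The `EthicsPolicy` enum, the `Scenario` fields, and `choose_maneuver` are all invented for illustration; real autonomous-driving software is vastly more complex and exposes no such switch.

```python
# Hypothetical illustration of the dilemma above: the car's "philosophy"
# is just a value someone must choose before the crisis ever occurs.

from dataclasses import dataclass
from enum import Enum

class EthicsPolicy(Enum):
    EGOISTIC = "protect the owner at all costs"
    ALTRUISTIC = "protect bystanders at the owner's expense"

@dataclass
class Scenario:
    bystanders_in_path: int   # the children who ran into the street
    swerve_kills_owner: bool  # oncoming traffic makes swerving fatal

def choose_maneuver(scenario: Scenario, policy: EthicsPolicy) -> str:
    """Return the action the car takes under its programmed philosophy."""
    if scenario.bystanders_in_path == 0 or not scenario.swerve_kills_owner:
        return "swerve safely"        # no dilemma: everyone can be spared
    if policy is EthicsPolicy.EGOISTIC:
        return "stay in lane"         # sacrifices the bystanders
    return "swerve"                   # sacrifices the owner

crisis = Scenario(bystanders_in_path=2, swerve_kills_owner=True)
print(choose_maneuver(crisis, EthicsPolicy.ALTRUISTIC))  # prints "swerve"
```

The logic is trivial by design; the point is that someone, whether the manufacturer, the buyer, or a regulator, must set that policy value in advance, which is exactly the question of who decides.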

Alternatively, the self-driving car manufacturer could offer two models of the car, each of which follows a different philosophy. If consumers have to choose which model to buy, how many will choose the car that sacrifices them? Although many people might agree that the car should spare the children in a hypothetical situation, few would actually volunteer to sacrifice themselves in order to follow ethics (this brings us back to the point above, that humans often don’t follow ethics in real-life situations). 

Another possibility is that the government mandates how the cars are programmed. On one hand, this gives the government the power to pass laws that are guaranteed to be followed to a tee, since the computers won’t deviate from their programming. On the other hand, this practically amounts to totalitarian power, because lawmakers are determining the actions of computers that are entrusted with making decisions for people. 

Creating New Forms of Discrimination

The potential dangers of AI are scary, but some of them are already a reality. Corporations, banks, and other institutions already use algorithms to make decisions, such as which loan applicants to approve or deny. On the positive side, an algorithm can't racially discriminate against an applicant (unless it's programmed to do so). On the negative side, the algorithm may discriminate against you based on your individual characteristics—it could be something in your DNA or in your social media history. With algorithms in charge, you're more likely to face discrimination based on who you are, rather than on which group you belong to.

This shift brings two consequences: 

  1. It will be difficult to recognize—and remedy—this form of discrimination. If a loan applicant asks why she was denied, the banker won't know. The people using the algorithm are unlikely to understand the computer's process or know its reasoning; they'll simply trust that it's right. (The sketch after this list illustrates why.)
  2. It will be difficult to fight this form of discrimination. If you face prejudice based on your race, gender, ability, or some other common characteristic, you can organize a protest with other people who share that characteristic. However, if you face prejudice based on your individual characteristics, there is no collective group experiencing this discrimination, so you’ll be fighting it alone.
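
As a toy illustration of that opacity, here is a hypothetical sketch. The behavioral features, learned weights, and approval threshold are all invented; the point is that the decision emerges from many small, individually meaningless signals, so there is no single human-readable reason to give the applicant.

```python
# Hypothetical sketch of an opaque, individualized loan score. The weights
# stand in for parameters a model learned on its own: no banker chose them,
# and none maps to a reason a human could cite for the denial.

applicant = {                  # invented, normalized behavioral signals
    "late_night_logins": 0.7,
    "typing_speed": 0.4,
    "zip_code_turnover": 0.9,
    "social_graph_churn": 0.6,
}

learned_weights = {            # what the model "decided" matters
    "late_night_logins": -0.8,
    "typing_speed": 0.3,
    "zip_code_turnover": -1.1,
    "social_graph_churn": -0.5,
}

score = sum(learned_weights[k] * v for k, v in applicant.items())
decision = "approve" if score > -1.0 else "deny"
print(f"score={score:.2f} -> {decision}")  # score=-1.73 -> deny
```

Because the denial emerges from the combination of weights rather than from any single trait, the banker has no explanation to give and the applicant has no group to organize with, which is precisely the pair of consequences listed above.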

Preventing AI Mind Control

Although AI could develop to the point that programmers could wire it with consciousness—which would essentially give computers a mind of their own, as science fiction thrillers forewarn—the possibility is remote. The larger danger is that humans put so much effort into developing AI that they neglect to develop their own consciousness and ability to discern. If people come to rely on AI for decision-making—and distrust their own instincts and capabilities in the process—they become easy targets for manipulation.

In order to avoid falling victim to total mind control by AI, humans must devote more time and energy to researching and developing human consciousness. Furthermore, this commitment must be prioritized above immediate economic and political benefits. For example, many managers expect their employees to respond promptly to emails, even after hours. That expectation causes employees to compulsively check and answer emails at the expense of their experiences and sensations: during dinner, they may be so absorbed in their email that they don't even notice the taste and texture of their food. If humans continue down this road, they will become cogs in a machine run by robots, and they'll lose the ability to live up to their potential as individuals.


———End of Preview———

Like what you just read? Read the rest of the world's best book summary and analysis of Yuval Noah Harari's "21 Lessons for the 21st Century" at Shortform.

Here's what you'll find in our full 21 Lessons for the 21st Century summary :

  • What the unique challenges of the 21st century are and will be
  • Why religion can't solve these 21st-century challenges
  • How algorithms like Netflix recommendations are teaching you not to trust yourself

Darya Sinusoid

Darya's love for reading started with fantasy novels (the LOTR trilogy is still her all-time favorite). Growing up, however, she found herself transitioning to non-fiction, psychological, and self-help books. She has a degree in Psychology and a deep passion for the subject. She likes reading research-informed books that distill the workings of the human brain/mind/consciousness and thinking of ways to apply the insights to her own life. Some of her favorites include Thinking, Fast and Slow, How We Decide, and The Wisdom of the Enneagram.
