Stephen Hawking: AI Might Become a Threat to Us

This article is an excerpt from the Shortform book guide to "Brief Answers to the Big Questions" by Stephen Hawking. Shortform has the world's best summaries and analyses of books you should be reading.

Like this article? Sign up for a free trial here.

Why does Stephen Hawking consider artificial intelligence to be a form of life? How can humanity avoid a disaster like the one in I, Robot?

According to Stephen Hawking, AI fulfills the two requirements to be considered living. Hawking warns that artificial intelligence may even become conscious one day, and if it does, it had better be on the side of humanity rather than against it.

Here’s more of what Hawking had to say about AI.

Evolution Through Artificial Intelligence

In his book Brief Answers to the Big Questions, Stephen Hawking said that, in addition to accelerating the evolution of our own species, we may create an entirely new form of life: electronic life. By Hawking’s definition, “life” requires only two capabilities: reproduction and metabolism. By that standard, electronic entities that can copy themselves and consume energy to operate would qualify, so according to Stephen Hawking, AI is the next form of life.

(Shortform note: Hawking defines “life” more broadly than most biology textbooks do. One common definition requires something to meet five criteria in order to be considered a living thing: 1) It must be made of cells. 2) It must have DNA. 3) It must maintain homeostasis [regulate its internal chemistry]. 4) It must be able to reproduce. 5) It must be able to collect energy that it can use for growth and movement [metabolism]. Hawking dismisses the first three of these criteria and keeps only the last two. Viruses illustrate the difference: by the classical textbook definition, they are not alive, since they aren’t made of cells, but by Hawking’s definition, they are.)

Hawking points out that just because some technology is “alive” doesn’t mean it’s self-aware. That said, he also suggests that self-aware computers and robots may not be relegated to science fiction forever. At some point in human evolution, the brain developed enough information-processing capability to become self-aware. So, as we continue to develop increasingly sophisticated computers and artificial intelligence (AI) algorithms, they may become conscious at some point.

Moreover, according to Hawking, once AI algorithms become better at writing algorithms than human programmers are, they will begin to evolve at an exponentially increasing rate. From that point on, AI will be increasingly beyond our control, whether it becomes self-aware or not. So we need to make sure that any artificial intelligence we create will work for our good and not our harm.

In particular, Hawking warns that the societal consequences of developing autonomous weapon systems could be disastrous. But not all issues involving AI are this clear-cut. Artificial intelligence could also provide powerful tools for solving important technical problems, such as finding cures for diseases or modeling and controlling climate change.

Once again, this makes scientific literacy increasingly important, because society will be confronted with questions about how to manage the development of artificial intelligence, questions that are both ethical and technical in nature.

Is Weaponized Artificial Intelligence Inevitable?

Hawking advises against developing autonomous war machines or otherwise weaponizing artificial intelligence. However, in light of his discussion of genetic engineering, the development of intelligent weapons may be unavoidable.

As we have discussed, he predicts that laws against human genetic engineering will not prevent it from happening, because someone will eventually try it anyway. The genetically engineered superhumans will then have an advantage over regular humans, forcing the rest of humanity either to embrace genetic engineering or to lose its influence.

A similar argument could be advanced for intelligent weapons. If artificial intelligence has the potential to make weapon systems dramatically more effective, then the first military power to develop these systems will have the advantage. They’ll get to make the rules, at least until their rivals develop comparable technology. 

So can we really avoid the development of intelligent weapon systems? As much as avoiding them might be in everyone’s best interests in the long term, the opportunity these systems offer an individual nation or faction to gain power in the short term makes it likely that someone will build them anyway. Other nations will then have to either accept that power’s military supremacy or match its technology.

Artificial Intelligence and Dataism

Just as Yuval Noah Harari’s discussion of techno-humanism sheds light on Hawking’s treatment of genetic engineering, Harari’s exposition of Dataism provides additional perspective on Hawking’s discussion of the social implications of AI.

Harari explains that Dataism is a type of “techno-religion”: a belief system based around technology. Those who believe in Dataism hold that the intrinsic value of any living thing or system is directly proportional to its data-processing capacity. Unlike humanists, Dataists don’t attribute any special significance to self-awareness or other qualities that set humans apart from machines and lower animals.

Dataists believe that humans became the dominant species on Earth simply because humans can collect and process more data than animals can. Inventions like written language increased humanity’s importance because they improved humans’ ability to transmit and process data. By the same token, large communities are more important than small ones because more people can process more data. And free societies are more likely to flourish than dictatorships because they facilitate the free flow of information and allow processing to be distributed among a larger number of people.

Dataists also believe that humanity’s tenure as the dominant species on Earth is rapidly coming to an end. Computers are becoming progressively more powerful, and humans are struggling to process all the data that is now available. Hawking predicts that AI will eventually become better at writing AI algorithms than human programmers are, at which point artificial intelligence will take off exponentially. Dataists make the same prediction and believe this will be the turning point. Once AI takes off, it will become the dominant life form on Earth, as its data-processing capabilities rapidly exceed humans’. After that, Dataists believe, humans will merely serve the all-powerful AI until the AI finds a way to assimilate any remaining humans into itself.

Dataism resonates with Hawking’s conjectures that self-awareness is probably just a matter of being able to process enough data, and that artificial intelligence may evolve beyond human control, whether it becomes self-aware or not. However, where Hawking raises these points as possibilities that society needs to consider and plan for, Dataists accept them as doctrine. While Hawking warns that we need to make sure AI serves our best interests, Dataists anticipate that we’ll soon be obliged to serve a global AI—and they think that’s a good thing.

Whether or not Dataists’ predictions come true, their techno-religious beliefs illustrate how technology now influences philosophical thinking, and how technological issues like AI are becoming social issues that affect everyone. 

———End of Preview———

Like what you just read? Read the rest of the world's best book summary and analysis of Stephen Hawking's "Brief Answers to the Big Questions" at Shortform.

Here's what you'll find in our full Brief Answers to the Big Questions summary :

  • The final lessons from Stephen Hawking, published after his death
  • Hawking's thoughts about God and religion
  • Why humans should be colonizing the Moon

Hannah Aster

Hannah graduated summa cum laude with a degree in English and double minors in Professional Writing and Creative Writing. She grew up reading books like Harry Potter and His Dark Materials and has always carried a passion for fiction. However, Hannah transitioned to non-fiction writing when she started her travel website in 2018 and now enjoys sharing travel guides and trying to inspire others to see the world.
