The 3 Dangers of Artificial Intelligence That Need Our Attention

This is a free excerpt from one of Shortform’s Articles. We give you all the important information you need to know about current events and more.

Don't miss out on the whole story. Sign up for a free trial here.

What are some of the dangers of artificial intelligence (AI)? Which AI dangers are the most urgent right now?

A Google engineer’s recent claim that a chatbot he was testing had become sentient shines a spotlight on the benefits and potential dangers of artificial intelligence. Experts say we’re ill-prepared as a society to address these dangers.

Read on to learn about the three current dangers of artificial intelligence, according to ethicists and researchers.

The Dangers of Artificial Intelligence Explained

Artificial intelligence (AI) already benefits our daily lives in many evident ways: robot vacuums, Alexa and Siri, customer service chatbots, personalized book and music recommendations, facial recognition, and Google search all depend on it. Beyond these everyday tasks, AI has countless potential health, scientific, and business applications, such as diagnosing illnesses, modeling mutations to predict the next Covid variants, predicting the structure of proteins to develop new drugs, creating art and movie scripts, and even making french fries using robots with AI-enabled vision. As the technology continues to develop, however, many in society wonder what the dangers of artificial intelligence might be.

Google engineer Blake Lemoine recently claimed to have experienced something far beyond these practical applications while testing Google’s chatbot language model, LaMDA: He believed LaMDA had become sentient, or conscious, and had interests of its own.

So, what are the immediate ethical risks that extremely human-like AI could pose?

Background: A Chatbot With a Soul?

When Lemoine reported his belief in LaMDA’s sentience to his Google bosses in spring 2022, they suspended him for allegedly violating data security policies, as he had previously discussed his concerns outside the company. Lemoine then created a media sensation with a Washington Post interview in which he said the computer model had talked with him about seeing itself as a person (though not a human) with rights and even a soul.

Lemoine, whose job was testing LaMDA (Google’s Language Model for Dialogue Applications) for bias and discriminatory speech, described his conversations with it as akin to talking with a “sweet kid.” He shared with the Post, and also uploaded publicly, a document recounting conversations in which LaMDA said it wanted “to be respected as a person” and related a “deep fear” of being unplugged, which it likened to dying. (Lemoine said LaMDA chose it/its as its preferred pronouns.)

Google said it extensively investigated Lemoine’s claims and found them baseless. After Lemoine escalated matters by finding a lawyer to represent LaMDA’s interests and contacting a member of the House Judiciary Committee to allege unethical activities, Google fired him for confidentiality violations.

The Top 3 Dangers of Artificial Intelligence

Short of sentience, AI poses more immediate dangers, according to ethicists and researchers. They contend that we need to take these dangers seriously or risk a dystopian future.

Some of the key dangers of artificial intelligence cited by ethicists are:

1) Unscrupulous operators using AI to deceive or manipulate

To counter such deception, we’d have to overcome people’s susceptibility to behaving as though they’re having meaningful conversations with machines, a tendency that goes back decades.

For example, in the 1960s, MIT researcher Joseph Weizenbaum created a therapist chatbot, Eliza, and was astonished at how quickly users began treating it as human. His secretary even asked him to leave the room so she could converse with the bot privately. The phenomenon of seeing humanness in a machine became known as the Eliza Effect.
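
To see how little machinery the Eliza Effect requires, here is a minimal sketch of an Eliza-style chatbot in Python. The pattern-response rules below are invented for illustration (they are not Weizenbaum's original script): the program simply matches a regular expression and echoes the user's own words back inside a canned template.

  import re

  # Illustrative pattern/response pairs in the spirit of Eliza.
  # These rules are hypothetical, not Weizenbaum's original script.
  RULES = [
      (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
      (re.compile(r"i feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
      (re.compile(r"because (.*)", re.IGNORECASE), "Is that the real reason?"),
  ]
  DEFAULT = "Please go on."

  # Swap first- and second-person words so echoed fragments read naturally.
  REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "i"}

  def reflect(fragment):
      return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

  def respond(user_input):
      for pattern, template in RULES:
          match = pattern.search(user_input)
          if match:
              # Echo the user's words back inside a canned template.
              return template.format(*(reflect(g) for g in match.groups()))
      return DEFAULT

  print(respond("I feel lonely working from home"))
  # -> Tell me more about feeling lonely working from home.

There is no understanding anywhere in that loop, yet exchanges with programs this simple were enough to convince Eliza's users that someone was listening.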

More recently, a New York Times article described how people confined to their homes during the Covid pandemic turned gratefully to an app called Replika for conversation and companionship.

Given our willingness to suspend disbelief, and the susceptibility of even trained engineers like Lemoine to anthropomorphizing machines, experts say AI presents an enormous opportunity to manipulate people for profit or political power, whether by spreading disinformation on a vast scale or by persuading people to do harmful things or act against their own interests.

To prevent manipulation, ethicists say we must answer key questions: Who will control AI? Will they respect privacy? And will banks, police departments, and others use it in discriminatory ways?

2) AI acting autonomously

Besides manipulation, AI ethicist Dr. Richard Kyte argued that another risk of AI is that it could act in ways that harm people. Formerly the stuff of science fiction, autonomous AI (algorithms that can plan ahead, anticipate potential outcomes, and strategize) is the goal for self-driving cars, trains, airplanes, infrastructure and security systems, and robots. However, such systems can make mistakes or reach logically sound decisions with harmful consequences; reports of self-driving cars crashing illustrate the danger. Autonomous weapons pose similar risks, such as global autonomous defense systems acting on flawed inputs or failing to value human life.

3) Disengagement from our lives

Finally, ethicists say AI could disrupt human interaction in several harmful ways. We may lose our sense of identity if computers assume many of the traits that have distinguished us as human, such as creativity, innovation, autonomy, and empathy.

Further, Dr. Kyte contended that if AI worked too well, making our lives more secure, prosperous, happy, and productive, we might turn to others less often to meet our needs. Another scholar argued that we could lose the sense of “moral agency,” or caring for others, that perpetuates our species.

Dr. Kyte added that diminished human connection and interdependence could exacerbate social problems such as polarization, anxiety, mistrust, and violence.

Immediate Steps to Take

While figures as different as Elon Musk and Stephen Hawking have argued that AI poses serious dangers, others say the potential benefits are just as big: averting climate disaster, enabling life on other planets, and curing diseases. A 2021 Pew Research Center survey showed the public is both excited and wary.

If we are to harness the benefits of artificial intelligence and preempt the dangers, many agree the public needs to understand it far better. Researchers and developers advocate several ways to increase our understanding, including:

  • Transparency: Companies developing AI need to start sharing with the public what they are working on and how their systems work, rather than defaulting to secrecy and proprietary interests (as Google did when it fired Lemoine for breaching confidentiality). Going further, a professor writing in Wired argued that transparency is the way for the tech industry to prove it supports democracy over autocratic control.
  • AI literacy: Low levels of AI literacy in the general population (including legislators) mean that the responses to any problems caused by AI are likely to be inadequate at best. Legislators, regulators, and the media need to educate themselves about AI, through reading, training, and consulting with experts, so we understand the sort of regulation we need to ensure AI is used ethically.

Want to fast-track your learning? With Shortform, you’ll gain insights you won’t find anywhere else.

Here's what you’ll get when you sign up for Shortform:

  • Complicated ideas explained in simple and concise ways
  • Smart analysis that connects what you’re reading to other key concepts
  • Writing with zero fluff because we know how important your time is

Emily Kitazawa

Emily found her love of reading and writing at a young age, thanks to her mom, who taught her both (Goodnight Moon will forever be a favorite). As a young adult, Emily graduated from the University of Central Florida with an English degree specializing in Creative Writing and TEFL (Teaching English as a Foreign Language). She later earned her master’s degree in Higher Education from Pennsylvania State University. Emily loves reading fiction, especially modern Japanese, historical, crime, and philosophical fiction. Her personal writing is inspired by observations of people and nature.
