The 3 Ethical Concerns of AI That Will Make Life Harder

What are the top ethical concerns of AI? How will AI make life challenging?

According to Life 3.0 by Max Tegmark, rapid AI advancement will likely create numerous challenges that we as a society must manage. Tegmark argues that people should be most concerned with three of them: economic inequality, outdated laws, and AI-enhanced weaponry.

Keep reading to learn about the ethical concerns of AI.

Concern #1: Economic Inequality

The first ethical concern, Tegmark argues, is that AI threatens to increase economic inequality. Generally, as researchers develop the technology to automate more types of labor, companies gain the ability to serve their customers while hiring fewer employees. The owners of these companies can then keep more of the profits for themselves while the working class suffers from fewer job opportunities and less demand for their skills. For example, the invention of the photocopier allowed companies to stop paying typists to duplicate documents manually, saving the company owners money at the typists' expense.

As AI becomes more intelligent and able to automate more kinds of human labor at lower cost, this asymmetrical distribution of wealth could increase.

(Shortform note: Some experts contend that new AI-enhanced technology doesn't have to lead to automation and inequality. If AI developers create technology that expands what a worker can do, rather than simply simulating that work, it could create new jobs and update old ones while still creating value for companies. These experts implore AI developers to consider the impact of their inventions on the labor market and adjust their plans accordingly, just as they would weigh any other ethical or safety concern.)

Concern #2: Outdated Laws

Second, Tegmark contends that our legal system could become outdated and counterproductive in the face of sudden technological shifts. For example, imagine that a company releases thousands of AI-assisted self-driving cars that save thousands of lives by being (on average) safer drivers than humans. However, these self-driving cars still get into some fatal accidents that wouldn't have occurred if the passengers had been driving themselves. Who, if anyone, should be held liable for these fatalities? Our legal system needs to be ready to adapt to these kinds of situations to ensure just outcomes as technology evolves.

(Shortform note: Although Tegmark contends that the legal system will struggle to keep up with AI-driven changes, other experts note that advancements in AI will drastically increase the productivity and efficiency of legal professionals. This could help our legal system adapt more quickly and mitigate the damage caused by rapid change. For instance, in a self-driving car liability case, an AI language model could quickly digest and summarize all the relevant documents from similar past cases (for instance, one involving a hotel-cleaning robot that injured a guest), instantly collecting the context necessary for legislators to make well-informed decisions.)

Concern #3: AI-Enhanced Weaponry

Third, Tegmark argues, AI advancements could drastically increase the killing potential of automated weapons systems. AI-directed drones could identify and attack specific people, or entire groups of people, without human guidance. This could allow governments, terrorist organizations, or lone actors to commit assassinations, mass killings, or even ethnic cleansing at low cost and with minimal effort. If one military power develops AI-enhanced weaponry, other powers will likely do the same, creating a new technological arms race that could endanger countless people around the world.

(Shortform note: In 2017, the Future of Life Institute (Tegmark’s nonprofit organization) produced an eight-minute film dramatizing the potential dangers of this type of AI-enhanced weaponry. After this video went viral, some experts dismissed its vision of AI-directed drones as scaremongering, arguing that even if multiple military powers developed automated drones, such weapons wouldn’t be easily reconfigured to target civilians. However, in a rebuttal article, Tegmark and his colleagues pointed to an existing microdrone called the “Switchblade” that can be used to target civilians.)


