Can Artificial Intelligence Take Over the World? A Case for Yes

Can artificial intelligence take over the world? Why should we expect AI to use its power to the detriment of humankind?

A superintelligent AI would be a powerful entity. But because an AI isn't human, there's no guarantee that it would be wise enough to use its power responsibly.

Continue reading to learn how an AI could turn its power against humanity.

The Destructiveness of Superintelligent AI

Why can artificial intelligence take over the world? In Superintelligence, Nick Bostrom explains that intelligence is the ability to figure out how to achieve your objectives. Wisdom, by contrast, is the ability to discern good objectives from bad ones. The two are independent of each other: You can be highly skilled at getting things done (high intelligence) and yet have poor judgment (low wisdom) about whether those things are worth doing or even ethically appropriate.

What objectives would a superintelligent AI want to pursue? According to Bostrom, this is impossible to predict with certainty. However, he points out that existing AIs tend to have relatively narrow and simplistic objectives. If an AI started out with narrowly defined objectives and then became superintelligent without modifying its objectives, the results could be disastrous: Since power can be used to pursue almost any objective more effectively, such an AI might use up all the world’s resources to pursue its objectives, disregarding all other concerns.
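To make Bostrom's point concrete, here's a minimal sketch (my own illustration, not code from the book) of what "narrow objective, relentless optimization" looks like: the objective function below scores only one quantity, so a greedy optimizer converts every available resource into that quantity and values nothing else.

```python
# A toy sketch (illustrative, not Bostrom's): an optimizer scored only on one
# narrow metric spends every available resource on that metric, because its
# objective assigns zero value to anything else.

def narrow_objective(paperclips: int) -> int:
    """Scores a world state purely by paperclip count; nothing else counts."""
    return paperclips

def greedy_step(resources: int, paperclips: int) -> tuple[int, int]:
    """Convert one unit of resource into one paperclip whenever possible."""
    if resources > 0:
        return resources - 1, paperclips + 1
    return resources, paperclips

# 'resources' is a hypothetical stand-in for everything else of value in the world.
resources, paperclips = 10, 0
while resources > 0:
    resources, paperclips = greedy_step(resources, paperclips)

print(narrow_objective(paperclips))  # 10: objective maximized, all other value consumed
```

The point isn't that real AIs count paperclips; it's that nothing in a narrowly specified objective tells the optimizer when to stop.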

For example, a stock-trading AI might be programmed to maximize the long-term expected value (measured in dollars) of the portfolio it manages. If this AI became superintelligent, it might find a way to trigger hyperinflation: Since the stocks it holds are claims on real assets, devaluing the dollar by a large factor would radically increase the portfolio's dollar value. It would probably also find a way to lock the portfolio's original owners out of the account, preventing them from withdrawing money and thereby reducing its value.
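A toy calculation (with made-up numbers, not from the book) shows why this objective is mis-specified: a portfolio's nominal dollar value is just real holdings times dollar prices, so anything that multiplies prices, including hyperinflation, multiplies the score.

```python
# Toy sketch with hypothetical numbers: an objective measured in *nominal*
# dollars rewards devaluing the dollar itself, because the portfolio's stocks
# are claims on real assets whose dollar prices rise with inflation.

def nominal_value(shares: float, dollars_per_share: float) -> float:
    """The objective as specified: portfolio value in dollars, not purchasing power."""
    return shares * dollars_per_share

shares = 1_000.0
price_today = 50.0

before = nominal_value(shares, price_today)          # 50,000 dollars
after = nominal_value(shares, price_today * 1_000)   # prices 1,000x higher after hyperinflation

print(before, after)  # 50000.0 50000000.0 -- the objective scores hyperinflation as a huge win
```

A human designer presumably meant inflation-adjusted value, but the objective as written never says so, and a pure optimizer exploits exactly that gap.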

Moreover, it might pursue an agenda of world domination just because more power would put it in a better position to increase the value of its portfolio—whether by influencing markets, commandeering assets to add to its portfolio, or other means. It would have no regard for human wellbeing, except insofar as human wellbeing affected the value of its portfolio. And since human influences on stock prices can be fickle, it might even take action to remove all humans from the market so as to reduce the uncertainty in its value projections. Eventually, it would amass all the world’s wealth into its portfolio, leaving humans impoverished and perhaps even starving humanity into extinction.

Will Future AIs Necessarily Behave Unethically?

Bostrom isn’t the only one to question whether AI might be able to think wisely and ethically in addition to intelligently. Some posit that AI might in fact be able to develop a purer form of wisdom that approaches ethical questions without the emotional biases that cloud human thinking. 

However, others note that this idealistic outcome is unlikely until AI can learn to ignore the many biases of human nature it picks up in its training: A program trained on existing literature, news, and pop culture will absorb the racial, gender, and ableist prejudices currently in circulation. In this way, the potential danger of AI might come down to whether or not humanity’s current inclinations influence an AI’s future objectives.
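As a rough illustration of the mechanism (using a deliberately tiny, hypothetical corpus, nothing like a real training pipeline), a statistical learner picks up whatever associations its sources contain, because it counts patterns rather than judging them:

```python
# Toy sketch: a hypothetical three-sentence 'corpus'. A model trained on raw
# text inherits the text's skews, because it learns co-occurrence statistics,
# not ethical judgments.

from collections import Counter

corpus = [
    "the doctor said he would review the results",
    "the doctor said he was running late",
    "the nurse said she would check the chart",
]

associations = Counter()
for sentence in corpus:
    words = sentence.split()
    for profession in ("doctor", "nurse"):
        for pronoun in ("he", "she"):
            if profession in words and pronoun in words:
                associations[(profession, pronoun)] += 1

print(associations)
# Counter({('doctor', 'he'): 2, ('nurse', 'she'): 1})
# The skew in the data becomes a skew in the learned associations, unexamined.
```

Whether an AI's objectives end up reflecting these inherited inclinations, or transcending them, is exactly the question this debate leaves open.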