Is a Superintelligent AI Coming? Here’s What Experts Say

What is superintelligent AI? And when, if ever, will it be introduced to the world?

In his book Life 3.0, Max Tegmark suggests that an artificial superintelligence likely won’t arrive until sometime after 2055, the year by which the average AI expert gives even odds of human-level AI. While Tegmark certainly thinks a superintelligent AI is possible, other experts disagree.

Let’s look at both sides of the argument over whether an artificial superintelligence is possible.

Max Tegmark: Superintelligent AI Is Coming

Tegmark defines intelligence as the capacity to successfully achieve complex goals. Thus, an “artificial superintelligence” is a computer sophisticated enough to understand and accomplish goals far more capably than today’s humans. For example, a computer that could manage an entire factory at once—designing, manufacturing, and shipping out new products all on its own—would be an artificial superintelligence. By definition, a superintelligent computer would have the power to do things that humans currently can’t; thus, it’s likely that its invention would drastically change the world. 

Tegmark asserts that if we ever invent a superintelligent AI, it will probably occur after we’ve already created “artificial general intelligence” (AGI). This term refers to an AI that can accomplish any task with at least human-level proficiency—including the task of designing more advanced AI.

Experts disagree about how likely it is that computers will reach human-level general intelligence. Some dismiss it as an impossibility, while others disagree only about when it will occur. According to a survey Tegmark conducted at a 2015 conference, the average AI expert is 50% certain that we’ll develop AGI by 2055.

AGI is important because, in theory, a computer that can redesign itself to be more intelligent can use that new intelligence to redesign itself more quickly. AI experts call this hypothetical accelerating cycle of self-improvement an intelligence explosion. If an intelligence explosion were to happen, the most sophisticated computer on the planet could advance from mildly useful AGI to world-transforming superintelligence in a remarkably short time—perhaps just a few days or hours.
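
To see why the timeline could be so compressed, here’s a toy model (a purely illustrative sketch; the numbers and the doubling assumptions are ours, not Tegmark’s). Suppose each self-redesign doubles the AI’s capability and, because the AI is now smarter, takes half as long as the previous one. The cycle times then form a geometric series that converges to a fixed total, so capability grows without bound inside a finite window:

```python
# Toy model of an "intelligence explosion" (illustrative only; the starting
# values and doubling/halving assumptions are hypothetical, not from Tegmark).

capability = 1.0     # AGI capability at human level (arbitrary units)
cycle_days = 30.0    # assumed duration of the first self-redesign cycle
elapsed = 0.0

for cycle in range(1, 11):
    elapsed += cycle_days
    capability *= 2      # each redesign doubles capability...
    cycle_days /= 2      # ...and the smarter AI finishes the next one twice as fast
    print(f"cycle {cycle:2d}: {capability:5.0f}x human level on day {elapsed:.1f}")

# The cycle times form a geometric series (30 + 15 + 7.5 + ...) that converges
# to 60 days, so capability grows without limit within a fixed, finite window.
```

Running this sketch shows capability passing 1,000 times human level by around day 60. The specific numbers are arbitrary; the point is the shape of the curve, which is what makes the days-to-hours scenario conceivable under these assumptions.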

(Shortform note: A more recent and expansive survey of 738 machine learning experts conducted in 2022 confirms that Tegmark’s survey data is still close to representative of expert opinion. The average expert is 50% certain that AI will be able to beat human workers at any task by the year 2059. Fifty-four percent of experts surveyed also believe that, once AGI exists, the odds of an intelligence explosion of some kind happening are “about even” or better on a scale from “very unlikely” to “very likely.”)

Counterargument: Why Artificial Superintelligence May Be Impossible

Some experts contend that the possibility of an artificial superintelligence, able to accomplish any goal better than humans, is a myth.

They argue that Tegmark’s definition of intelligence is overly simplistic, as it falsely assumes that intelligence is a quantity you can measure with a single metric. There’s no one mental attribute or process you can use to achieve all complex goals; rather, different goals require different kinds of intelligence. For example, bloodhounds are more intelligent than humans at distinguishing and tracking specific smells, while humans are more intelligent at writing poetry. In short, there’s no such thing as general intelligence, which means a hypothetical computer with superhuman-level general intelligence is impossible.

Critics of Tegmark’s view argue that part of the reason people believe in general intelligence is that we’re limited by the human point of view. Although it seems as though humans can apply their cognitive skills to solve any problem (and therefore have general intelligence), in reality, it only seems that way because we spend all our time solving problems that the human brain is readily able to solve. Likewise, we only believe that computers have the potential to be much smarter than us because we see them doing things humans can’t do, like instantly performing complex calculations. However, such computers have a different kind of intelligence, not necessarily more intelligence.

By this logic, an AI simulating the human brain wouldn’t be a true “AGI”—it would just be able to solve the kinds of problems that humans can. Additionally, such an AI probably wouldn’t be able to trigger an intelligence explosion. Even if it could improve its own programming as well as a human could, there’s no reason to believe further iterations would be able to solve problems at an exponentially higher speed. This belief relies on the assumption that the ability to solve any problem is a uniform trait you can procedurally optimize—and as we’ve discussed, this may not be the case.
