What is the future of AI? Why does Max Tegmark believe artificial superintelligence is possible?
The idea that we could build a computer that’s smarter than us may seem far-fetched. Yet, some evidence indicates that such technology is on its way, according to Max Tegmark’s book Life 3.0.
Keep reading to learn what AI could become.
Evidence That Artificial Superintelligence Is Possible
What is the future of AI and what evidence do we have that it’s evolving? First, the artificial intelligence we’ve created so far is functioning more and more like AGI. Researchers have developed a new way to design sophisticated AI called deep reinforcement learning. Essentially, they’ve created computers that can repeatedly modify themselves in an effort to accomplish a certain goal. This means that machines can now use the same process to learn different skills—a necessary component of AGI. Currently, there are many human skills that developers can’t teach to computers using deep reinforcement learning, but that list is becoming shorter.
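The core loop of reinforcement learning, where a machine repeatedly adjusts itself to get better at reaching a goal, can be sketched in a few lines. This is a deliberately simplified illustration: it uses a tiny lookup table (tabular Q-learning) instead of the deep neural network that "deep" reinforcement learning implies, and the five-cell track environment is invented for the example.

```python
import random

# Toy environment: the agent starts at position 0 on a 5-cell track and
# earns a reward of 1.0 only by reaching the final cell (position 4).
N_STATES = 5
GOAL = N_STATES - 1
ACTIONS = [-1, +1]  # move left or move right

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), GOAL)
    done = next_state == GOAL
    return next_state, (1.0 if done else 0.0), done

# Q-table: the machine's current estimate of how good each
# (state, action) pair is. This is what it repeatedly modifies.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
for episode in range(500):
    state, done = 0, False
    while not done:
        # Mostly exploit the current estimates; occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy for each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The point of the sketch is the structure, not the scale: nothing about the loop is specific to moving along a track, which is why the same process can, in principle, be pointed at many different skills.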
(Shortform note: By using a specific form of deep reinforcement learning called Reinforcement Learning from Human Feedback (RLHF), AI developers can get computers to optimize for vaguely defined or subjective values. For instance, the developers at OpenAI used this process to train the AI-based chatbot ChatGPT: Humans answered sample prompts to show ChatGPT what kinds of answers they wanted. Then, they allowed the chatbot to try answering prompts and scored its answers based on how accurate and desirable they were. The AI then used this “reward” data to create a model describing what qualifies as a “good” answer, allowing it to further train itself.)
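The reward-modeling step described in the note above can be sketched as follows. Everything here is a stand-in for illustration: real RLHF trains a neural network over text, whereas this toy version represents each answer with two hand-made features and fits a simple preference model (a Bradley-Terry-style logistic update) so that human-preferred answers score higher.

```python
import math

# Hypothetical featurizer: crude proxies for answer quality.
# A real reward model would read the full text with a neural network.
def features(answer):
    return [1.0 if 5 <= len(answer.split()) <= 20 else 0.0,  # reasonable length?
            1.0 if "paris" in answer.lower() else 0.0]       # mentions the answer?

def reward(weights, answer):
    """Score an answer under the current reward model."""
    return sum(w * f for w, f in zip(weights, features(answer)))

# Invented human preference data: (preferred, rejected) pairs, as if
# labelers compared two outputs for "What is the capital of France?"
prefs = [
    ("The capital of France is Paris.", "I do not know."),
    ("Paris is the capital city of France.", "France is a country in Europe."),
    ("The capital of France is Paris.", "The capital of France is a city."),
]

# Fit weights so preferred answers outscore rejected ones
# (gradient steps on a logistic preference loss).
weights = [0.0, 0.0]
lr = 0.5
for _ in range(100):
    for good, bad in prefs:
        diff = reward(weights, good) - reward(weights, bad)
        grad = 1.0 / (1.0 + math.exp(diff))  # gradient of -log(sigmoid(diff))
        for i, (fg, fb) in enumerate(zip(features(good), features(bad))):
            weights[i] += lr * grad * (fg - fb)

# The fitted reward model can now rank fresh candidate answers,
# supplying the "reward" signal the chatbot trains itself against.
candidates = ["Paris.", "The capital of France is Paris.", "No idea."]
best = max(candidates, key=lambda a: reward(weights, a))
print(best)  # prints "The capital of France is Paris."
```

Note the division of labor this mirrors: humans only supply comparisons between answers, and the learned model generalizes those comparisons into scores for answers no human has rated.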
Second, Tegmark asserts that given everything we know about the universe, there’s no obvious reason to believe that artificial superintelligence is impossible. Although it may seem like our brains possess unique creative powers, they store and process information in much the same way that computers do. The information in our heads is a biological pattern rather than a digital one, but the information itself is the same no matter what material it’s encoded with. In theory, computers can do everything our brains can do.
(Shortform note: Some experts disagree, arguing that although it’s theoretically possible for hardware components to run the same information processes as a human brain, it’s not necessarily true that they could do so at the same speed. The organic material of the human brain is far faster than computers at processing vast amounts of data; we only assume otherwise because the brain processes data unconsciously. If a computer could process information like a brain, it would likely be so slow that it would defeat the purpose of simulating a brain in the first place. By this logic, it might be impossible to create a human-level AI that can improve itself faster than a human could, placing the intelligence explosion out of our grasp.)
Tegmark concedes that AGI and artificial superintelligence might still be a pipe dream, impossible to create for some reason we don’t yet see. However, he contends that if there’s even a small chance that an artificial superintelligence will exist in the near future, it’s our responsibility to do everything in our power to ensure that it has a positive impact on humanity. This is because artificial superintelligence has the power to completely transform the world—or end it.
(Shortform note: Some experts contend that because we don’t know if a world-ending artificial superintelligence is possible or how likely it is to happen, we need to pause all ongoing AI development until we understand the science enough to proceed safely. Due to current economic incentives to create profitable AI-based products, research laboratories are advancing AI as fast as they can. As a result, they’re arguably racing to see who can trigger an intelligence explosion first. A development pause (if possible) would require significant cooperation between research firms as well as governments around the world.)