Can machines truly think and feel? What happens to “you” if your brain is scanned and uploaded to a computer? Is your digital self still “you”?
These questions, which philosophers have debated for millennia, resurface in Ray Kurzweil's provocative idea that consciousness might be nothing more than patterns of information. If he's right, the implications are staggering, not just for artificial intelligence, but for what it means to be human. Keep reading to learn Kurzweil's thoughts on continuity of identity from his books How to Create a Mind and The Singularity Is Near.
Kurzweil’s Theory Raises Questions About Identity
In How to Create a Mind, inventor and futurist Ray Kurzweil suggests that understanding minds as pattern processors opens the door to creating artificial consciousness, which would challenge our traditional understanding of what it means to be conscious and human. His framework raises fundamental questions not only about the nature of consciousness, but also about the continuity of identity across time and technological transformation.
Kurzweil’s theory leads to a conclusion that challenges basic assumptions about consciousness and intelligence: If consciousness emerges from patterns of information rather than biological processes, then digital minds are real minds—not just simulations. We often assume that computers can only mimic intelligence. But, in Kurzweil’s view, a sufficiently advanced pattern recognition system wouldn’t be pretending to think—it would really be thinking. He argues that the patterns of information processing that constitute consciousness don’t depend on being implemented in biological neurons versus electronic circuits.
Consciousness & Identity
Kurzweil acknowledges that accepting this conclusion requires what he calls a “leap of faith.” There’s no definitive test for consciousness that doesn’t rely on philosophical assumptions about what consciousness actually is. However, he argues that this leap is no different from the one we make when we assume other humans are conscious based on their behavior and self-reports—we can’t directly access anyone else’s subjective experience. His position is straightforward: Once machines become convincing in their emotional reactions and claims about their subjective experiences—once they can make us laugh, move us to tears, and respond appropriately to joy and suffering—we should accept them as conscious beings.
(Shortform note: Whether you believe a digital copy of yourself would still be “you” depends greatly on your conceptions of consciousness and identity. In Flow, Mihaly Csikszentmihalyi defines consciousness as a mental state of awareness in which we perceive, process, order, and act on sensory input and information—something it’s not hard to imagine computers doing. However, in Homo Deus, Yuval Noah Harari defines consciousness as the combination of thoughts, emotions, and sensations that create your subjective experience. It’s the subjective nature of the latter definition that calls the possibility of AI consciousness into question. In Waking Up, Sam Harris points out that, because consciousness is a subjective experience, it can be studied only from the inside. In other words, science can objectively study the products of consciousness, but not consciousness itself. Therefore, if a computer claimed to be conscious, we’d simply have to decide whether to take it at its word.)
| Can AI Think, Understand, and Reason Like Humans Do?

It's unclear to what degree current AI development is even moving in the direction Kurzweil suggests. Modern AI systems have what researchers call "jagged intelligence": They can solve math problems, write code, and hold conversations, yet fail at tasks that feel effortless to humans. Some models can engage in what sounds like reasoning, breaking down complex problems into smaller steps. But researchers question whether even these systems can really think or reason in the same way humans do. Large language models like OpenAI's GPT build their "knowledge" about the world purely by mapping text patterns, rather than by understanding how things work. This contrasts sharply with human learning, which occurs through embodied experience, curiosity, and interaction with the physical and social world.

Defining consciousness is difficult, but philosophers and neuroscientists tend to agree that it requires subjective experience (the experience of what it feels like to be you) and is more than just the ability to process information. Kurzweil argues that once machines can convince us that they have subjective experiences, we should accept them as conscious. Many other experts decline to take this "leap of faith": They think consciousness may require being a living system, with hormones, emotions, and interaction between brain and body to create genuine feelings and sensations. If consciousness really depends on experiencing the world as a living organism, then AI might never achieve consciousness, no matter how convincingly a model seems to simulate human thought, reasoning, or feeling.

Yet some experts say we may be defining consciousness too narrowly. Rather than having a continuous, persistent self like we experience, AI seems to have brief moments of something resembling awareness as it processes information. If consciousness doesn't need to be permanent to be meaningful, then these temporary cognitive states might represent a different but genuine form of conscious experience. |
Identity as Information Patterns
This framework forces us to reconsider how we understand human identity. If consciousness consists of information patterns, Kurzweil argues that what makes you “you” is the specific pattern of information stored in your brain’s pattern recognition networks: the memories you’ve accumulated, the skills you’ve learned, the personality traits you’ve developed, and the ways of processing information you’ve established. Kurzweil contends that your identity isn’t tied to the particular molecules in your brain, which are completely replaced every few weeks. Instead, your identity lies in the continuity of information patterns—like how a river remains the same river despite consisting of completely different water molecules from day to day.
This has radical implications. Kurzweil contends that, if your brain were scanned and copied while you remained alive, both versions would feel like the “real” you, but they would be separate conscious entities. But, if your brain were gradually replaced with digital components over time, the way the molecules in your body are continually replaced, you would maintain continuity of identity throughout the process. The key insight is that identity is preserved through continuity of pattern, not continuity of physical substance.
Philosophical Precedents
Kurzweil’s ideas about consciousness and identity may seem radically modern, but they connect to philosophical questions that humans have grappled with for millennia. Ancient thinkers wrestled with similar puzzles about what makes something—or someone—remain the same through change. These earlier frameworks offer illuminating perspectives on whether digital copies of our minds would truly be “us,” and whether there’s even a stable “self” to copy in the first place.
| How Ancient Philosophers Thought About Continuity

Kurzweil's argument that identity lies in information patterns recalls an ancient Greek thought experiment, the Ship of Theseus, which asks whether a ship remains the same ship if all its planks are gradually replaced. Perhaps the ship becomes different when the first plank changes, or when half (or all) are replaced; or maybe it remains the same because it always retains its essential form. It's in this latter vein that Kurzweil argues your identity persists: The essential structure of who you are (your memories, skills, personality traits, and ways of thinking) remains continuous. This is like saying the ship remains Theseus's ship because it keeps the same shape, function, and history, or that a river is always the same river.

Another ancient tradition, Buddhism, takes the opposite view. Where Kurzweil sees patterns that continue through change as proof that identity persists, Buddhism sees the constant change as evidence that there's no fixed self at all, and that your sense of continuity is an illusion. Buddhists believe that what you experience as your "self" emerges from five constantly changing aggregates: your physical form, feelings, perceptions, mental formations, and consciousness. Since these are always in flux, there's no stable "you." So whether your brain were scanned and copied or gradually replaced with digital pieces, Buddhism might suggest that both scenarios merely continue the illusion of selfhood. |
(Shortform note: Whether a digital copy of your mind is still "you" may be a moot point, because Sam Harris argues in Waking Up that your sense of self is merely an illusion created by your mental processes. The sense that you're an incorporeal being sitting behind the steering wheel of your brain may simply be a figment of your brain's functions. In Harris's view, a digital "you" would not be "you" at all because there's no "you" to begin with; you're just a continuity of conscious awareness.)
The Path to a Digital Self
In The Singularity Is Near, Kurzweil covers advances in brain research, how they apply to computation models, and how, if computers can simulate brains, you may one day be able to upload your whole mind into the digital world.
Current Brain Research and Modeling
Historically, the medical tools we’ve used to analyze and understand the brain were crude. But, like all other modern technology, they’re improving at an accelerated pace. It’s now possible to image a functioning brain down to the level of individual neurons. Kurzweil says that computer models of the brain are likewise improving at a phenomenal rate. While the brain is extremely complex with trillions of neural connections, there is a lot of built-in redundancy. An effective computer model of a brain doesn’t have to simulate every neuron firing, and we’ve already made remarkable progress in modeling some of the brain’s specific regions.
(Shortform note: Kurzweil’s hope for a fully functional simulation of the human brain was attempted by the Human Brain Project, which ran from 2013 to 2023. It fell short of its goal of a digital model of the entire brain, but it was able to model over 200 brain regions and made discoveries that are used to treat neurological disorders and injuries. Another byproduct of the Human Brain Project is EBRAINS, an open digital research network devoted to furthering neuroscience and brain studies using the latest computer tools and data.)
Technical Challenges and Solutions
Kurzweil admits that the brain's major advantage over digital computers is that it's massively parallel: It brings countless neural pathways to bear on any problem simultaneously, as opposed to the more linear approach taken by traditional computing. This more than makes up for neurons' relatively slow chemical transmission of data. However, hardware for fast parallel processing is rapidly becoming available for digital computers. Another advantage of the human brain is neuroplasticity: The brain can rearrange its connections and adapt, something fixed computer hardware cannot do. Nevertheless, Kurzweil insists that the brain's ability to adapt and reorder itself can be addressed in the realm of software if not hardware.
| Simulating the Brain

Kurzweil's prediction that future computers would copy human brain functions has held true, at least in the field of computer research. Engineers are now designing computers with spiking neural networks (SNNs), which mimic how neurons interact rather than relying on traditional computer architectures. Meanwhile, parallel processing analogous to the brain has allowed machine learning and "big data" analysis to advance by leaps and bounds. Recent work has resulted in the development of Neural Processing Units, a new type of computer processor that allows SNNs to be built at larger scales.

The plasticity of brain cells is harder to reproduce, but researchers have developed synaptic transistors that mimic neurons' ability to change and adapt. While much of how the brain works is still unknown, scientists hope that computer hardware that functions like neurons will unlock further progress in brain research. |
Mapping the Brain
Kurzweil cautions us to remember that the brain isn’t perfect—it evolved to function just well enough for our primitive ancestors to survive. Once we can digitally replicate the brain, we’ll also be able to improve its design, and once our computing power is great enough, Kurzweil believes that it will become possible to scan and upload the memories and specific neural connections of a person’s mind into a digital self. Though this may sound like pure science fiction, the level of computing necessary should be readily available in the 2030s, so creating a digital backup of yourself will only be a question of software and the state of brain-scanning technology.
(Shortform note: Because making a digital backup of your mind offers potential immortality, Russian entrepreneur Dmitry Itskov founded the 2045 Initiative to fund research on digital mind uploads and robot avatars to replace human bodies. Neuroscientist Ken Hayworth of the Brain Preservation Foundation agrees that uploading consciousness should be possible, if beyond the reach of current technology. In 2016, Hayworth predicted that mapping even a fly's brain would take two years, let alone the brain of a more complicated organism. However, Kurzweil's theory of exponential growth may already have supporting evidence: In 2023, researchers mapped the entire brain of a mouse at a resolution 1,000 times greater than a normal MRI.)
Continuity of Identity Is Becoming More Important
Whether or not your digital self is still “you” will pose both philosophical and legal conundrums. Our entire legal system revolves around the rights of living, conscious beings. So the matter of whether a digital being can be conscious will become much more than a hypothetical issue. However, Kurzweil suggests that, as we work through the legal ramifications, our transition from biological to digital entities won’t be abrupt. Instead, it will be a slow process as we gradually augment our physical brains with more and more digital capabilities, until the center of our consciousness gradually slides from the physical world into the electronic realm.
To dig deeper into the concept of continuity of identity in its broader context, read Shortform’s guides to the books that these ideas come from: