Quantum mechanics reveals that the universe operates on probability rather than certainty. One of its most famous ideas is Heisenberg’s uncertainty principle, which explains why we can never measure both the position and velocity of a particle with perfect accuracy at the same time.
This limitation isn’t about faulty instruments or poor technique. It’s built into the fabric of reality itself. The principle emerged from early discoveries in quantum physics and has profound implications for what we can know about the world. To explain how Heisenberg’s uncertainty principle works and what it means, we’ve brought together ideas from theoretical physicist and cosmologist Stephen Hawking and astrophysicist Adam Becker.
Werner Heisenberg’s Uncertainty Principle Explained
In his book A Brief History of Time, Hawking explains that Heisenberg’s uncertainty principle states there is always at least a certain amount of uncertainty in your measurement of the position and velocity of a particle. This is important because, to predict where a particle will go (or is most likely to go) in the future, you need to know where it was and which way it was going at some point in the past or present. Uncertainty about the present creates greater uncertainty about the future.
(Shortform note: Hawking explains how uncertainty can limit the accuracy of your predictions in physics, but this general concept is applicable to other areas as well, especially in fields like the social sciences where outcomes are harder to measure or quantify. In his book Superforecasting, Philip Tetlock discusses the importance of measurement in predicting the future. In particular, he points out that many political and economic forecasters’ predictions are never actually checked against measurements after the fact. This makes it difficult to assess the credibility of the forecaster or the accuracy of their methods.)
To understand how the uncertainty principle works, it’s important to know a few things about quantum mechanics. For one thing, as Hawking notes, a basic premise of quantum mechanics is that certain quantities like energy and frequency have to be incremented by at least a certain minimum value. (Shortform note: This minimum unit is called a “quantum” of energy, which is where “quantum mechanics” gets its name.)
To explain this phenomenon, it’s helpful to consider how quantum mechanics was discovered, so let’s discuss its origins. Then, we’ll show how quantum mechanics gives rise to the uncertainty principle.
The Origins of Quantum Mechanics
Hawking recounts that, circa 1900, scientists realized that their theories of radiant heat transfer predicted that any hot object should radiate an infinite amount of energy, which was obviously not the case. The reason was that in these theories, radiation could have any frequency, and objects were thought to give off radiation uniformly over a range of frequencies.
For example, a hot object might give off radiation at 10 MHz, 10.1 MHz, 10.01 MHz, and so on. Mathematically, there are an infinite number of frequencies between 10 and 11 MHz (or between any two frequencies), so if the object radiates energy at every possible frequency, then it will give off an infinite amount of energy.
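To make this divergence concrete, here's a toy numerical sketch. The numbers are hypothetical and the "one unit of energy per allowed frequency" is an arbitrary assumption for illustration, not real physics; the point is only that the total energy depends entirely on how many frequencies are allowed.

```python
# Toy sketch of the "infinite energy" problem (hypothetical numbers, not real physics):
# if an object emits a fixed amount of energy at every allowed frequency, the total
# depends entirely on how many frequencies are allowed between two limits.

def total_energy(f_min_hz, f_max_hz, spacing_hz, energy_per_frequency=1.0):
    """Total energy radiated between f_min and f_max when the allowed
    frequencies are spaced `spacing_hz` apart."""
    count = int((f_max_hz - f_min_hz) / spacing_hz) + 1
    return count * energy_per_frequency

# With a finite spacing, the energy radiated between 10 MHz and 11 MHz is finite
# (11 allowed frequencies at a spacing of 0.1 MHz)...
print(total_energy(10e6, 11e6, spacing_hz=0.1e6))

# ...but as the spacing shrinks toward zero (a continuum of frequencies),
# the total grows without bound.
for spacing in (1e5, 1e3, 1e1):
    print(spacing, total_energy(10e6, 11e6, spacing))
```

Shrinking the spacing toward zero mimics the pre-Planck assumption of a continuum of frequencies, and the "total energy" blows up accordingly.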
Hawking explains how, to resolve this problem, Max Planck hypothesized that physical quantities like the frequency of radiation are “quantized,” meaning they can only have certain distinct values. If frequency could only be incremented by a finite value, then an object would only give off a finite amount of radiation because there would only be a finite number of frequencies at which it could give off radiation. This solved the problem and led to the development of the theory of quantum mechanics.
Shortform Note: Standing Waves and Quantization
As Hawking recounts, Planck was the first to recognize that electromagnetic energy was quantized, and Planck may have coined the term “quantum.” However, in Planck’s day, it was already common knowledge that certain physical quantities were “quantized,” in the sense that they could only have certain values.
In particular, the harmonics of standing waves are quantized, as Pythagoras described around 500 BC. If you pluck a guitar string (or any string stretched between two fixed points), it will only vibrate at certain frequencies, called harmonics. This is because the fixed ends of the string constrain it, such that it can only support waves if the length of the string is equal to half the wavelength of the wave, or a whole-number multiple of this length. So, if your guitar string is 24 inches long, it will only vibrate at frequencies that correspond to waves with a wavelength of 48 inches, 24 inches, 16 inches, 12 inches, 9.6 inches, and so on.
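The guitar-string arithmetic above can be sketched in a few lines of Python (the function name is ours, for illustration). Since the string length must be a whole-number multiple of half a wavelength, the allowed wavelengths are 2L divided by 1, 2, 3, and so on:

```python
# Harmonic wavelengths of a string fixed at both ends: the string length L must be
# a whole-number multiple of half a wavelength, so lambda_n = 2 * L / n.

def harmonic_wavelengths(length_in, n_harmonics):
    """First n allowed wavelengths (in inches) for a string of the given length."""
    return [2 * length_in / n for n in range(1, n_harmonics + 1)]

print(harmonic_wavelengths(24, 5))  # [48.0, 24.0, 16.0, 12.0, 9.6]
```

This reproduces the wavelengths listed above for a 24-inch string: only these discrete values are allowed, which is exactly what "quantized" means.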
Today, physicists often describe an electron’s orbit around an atom’s nucleus as a type of standing wave and use this to explain the quantization of electromagnetic energy.
Quantum Uncertainty
But how does the fact that frequency is quantized give rise to the uncertainty principle? It has to do with the way light disturbs particles.
As Hawking explains, you can see something only if it is reflecting (or otherwise emitting) light. If there’s no light, you won’t be able to see it. The same principle applies to measuring subatomic particles: The instruments that measure their position and velocity can only “see” them by bouncing light (or other particles, like electrons) off of them.
However, according to Hawking, this imposes fundamental limitations on the accuracy of the measurement, because bouncing photons or electrons off of a subatomic particle will change its velocity. The higher the frequency of the light bouncing off a subatomic particle, the more energy its photons have, and the more it will change the velocity of the particle you’re trying to measure. The frequency is also inversely proportional to the wavelength, and the light that bounces off the particle will only indicate its position to the nearest wavelength.
Thus, if you use very high-frequency light, you can measure the particle’s position very accurately, but you’ll disrupt its velocity so much that you get no useful information about its velocity. If you use very low-frequency light, you can measure its velocity accurately, but not its position. If you use an intermediate frequency, you can measure both position and velocity with an intermediate amount of uncertainty, but your total uncertainty will always be at least a certain value.
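This tradeoff can be sketched numerically using the standard "Heisenberg microscope" heuristic: the probing light locates the particle only to about one wavelength, while a single scattered photon kicks the particle's momentum by about Planck's constant divided by the wavelength. This is a rough order-of-magnitude sketch, not a rigorous derivation:

```python
# Order-of-magnitude sketch of the tradeoff (a heuristic, not a rigorous derivation):
# light of wavelength lam locates the particle only to about one wavelength
# (dx ~ lam), while a single photon of that light kicks the particle's momentum
# by about h / lam (dp ~ h / lam). The product dx * dp is then ~ h for ANY wavelength.

H = 6.626e-34  # Planck's constant, in joule-seconds

def uncertainty_product(wavelength_m):
    dx = wavelength_m          # position known only to ~ one wavelength
    dp = H / wavelength_m      # momentum kick from one scattered photon
    return dx * dp

# Short wavelength: good position, big momentum kick. Long wavelength: the reverse.
for lam in (1e-12, 1e-9, 1e-6):
    print(f"{lam:.0e} m -> dx*dp = {uncertainty_product(lam):.3e} J*s")
```

Whichever wavelength you pick, the product of the two uncertainties stays at roughly Planck's constant: you can trade position accuracy for velocity accuracy, but you can't shrink both at once.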
Measurement Error vs. Quantum Uncertainty

It is important to distinguish between ordinary measurement uncertainty and quantum uncertainty. In real life, every measuring device has limited precision. For example, imagine you’re trying to measure the length of a metal rod. If you measure it with a ruler, your measurement is only as accurate as the marks on the ruler. Say your ruler is marked in sixteenths of an inch, so your measurement is only certain to the nearest sixteenth of an inch. If you measure it with a dial caliper instead, you can get a more accurate measurement, but your measurement will still have a few thousandths of an inch of uncertainty. With increasingly precise measuring tools, you can reduce the uncertainty in your measurement.

The same principle generally applies to measuring position, velocity, or anything else that you might want to measure: The better your measuring tools, the less uncertainty there will be in your measurement.

When it comes to measuring the position and velocity of subatomic particles, the precision of your measuring instruments is still important. But, as we’ve discussed, the uncertainty principle imposes additional limits on your ability to determine both the position and velocity of a particle. So even in the hypothetical case where you had perfect measuring tools, there would still be uncertainty in your measurement (and any measurement uncertainty from your instruments gets added to the quantum uncertainty).
What Does Uncertainty Even Mean?
Heisenberg interpreted this not just as a measurement limitation but as a fundamental statement about reality itself. According to Adam Becker in his book What Is Real?, the uncertainty principle—the idea that the more precisely you measure a particle’s position, the less precisely you can know its momentum, and vice versa—was Heisenberg’s approach to the measurement problem of quantum mechanics. This limitation wasn’t due to imperfect instruments but to constraints imposed by quantum mechanics itself.
Why the Uncertainty Principle Is Really About Wave Behavior

The uncertainty principle makes more intuitive sense when you visualize quantum particles behaving like waves. Think of a ripple on the surface of a pond, but imagine that you can’t watch it passively. Instead, you have to physically interact with the water to get any information. To measure the wave’s speed, you’d need to place sensors in the water to time how long it takes peaks and troughs to pass between them, but the sensors themselves disturb the wave, blurring the exact location of any single peak. Conversely, to pinpoint where one peak is, you’d need to place a sensor right at that spot, but doing so would disturb the wave and prevent you from measuring how quickly it’s moving.

Heisenberg realized that measuring quantum particles works the same way: Any attempt to observe them requires physical interaction, which creates “discontinuities” that alter what you’re trying to measure. This fundamental tradeoff doesn’t exist for classical objects, which can theoretically be measured with perfect accuracy if we have perfect instruments. But quantum particles are fundamentally wave-like, and it’s this wave nature that creates the uncertainty described by Heisenberg.
Becker explains that, like Niels Bohr, Heisenberg took an anti-realist approach to the measurement problem, arguing that particles don’t have definite properties until measured. Yet, while Bohr denied that any reality existed between measurements, Heisenberg proposed that particles exist, but only in a realm of “potentialities” rather than actualities. This solution created new puzzles: If particles exist only as potentialities, how do they interact with scientific instruments to produce definite measurements? How can something without actual characteristics cause specific readings? Despite their differences, both Bohr and Heisenberg reached the same conclusion: Questions about what particles are doing between measurements are meaningless.
The Ancient Roots of Quantum Potentialities

Heisenberg’s concept of quantum potentialities borrowed from ancient Greek philosophy, specifically Aristotle’s distinction between “potentiality” (dunamis) and “actuality” (energeia). For Aristotle, reality had multiple layers: not just what actually exists, but also what could potentially exist. For instance, an acorn contains the potentiality to become an oak tree; the mature oak is the actualization of that potential. The acorn contains “treeness” as a real, but not yet manifest, aspect of its being. However, becoming a tree isn’t guaranteed; the acorn could become nothing at all.

Just as Aristotle argued that the same thing could have contradictory potentialities, but never contradictory actualities, Heisenberg argued that quantum particles exist in superpositions of multiple states until measurement actualizes one of these states. Classical physics assumes objects have only definite, actual properties, but Aristotelian thinking explains how quantum mechanics can describe situations that are impossible in classical physics: It doesn’t deal with classical objects at all, but with things that exist in multiple layers of reality simultaneously.
Learn More About Quantum Uncertainty
To better understand Heisenberg’s uncertainty principle in the broader context of quantum mechanics, check out Shortform’s guides to the books referenced in this article:
- A Brief History of Time by Stephen Hawking
- What Is Real? by Adam Becker