PDF Summary: What Is Real?, by Adam Becker


Below is a preview of the Shortform book summary of What Is Real? by Adam Becker. Read the full comprehensive summary at Shortform.

1-Page PDF Summary of What Is Real?

Quantum physics is one of science’s most successful theories, yet for nearly a century, physicists have avoided asking what it implies about reality itself. Science writer Adam Becker reveals the reason why: The dominant interpretation of the math underlying quantum physics arose from institutional pressure, political persecution, and the deliberate suppression of alternatives, not through valid scientific debate. In What Is Real?, Becker exposes how physicists abandoned their fundamental mission to understand reality and chose to sidestep quantum mechanics’ profound implications.

Our guide explores the scientific crisis that quantum mechanics created and the social pressures that determined how that crisis was “resolved.” We’ll examine Becker’s argument about how this debate connects to broader questions about scientific truth, and we’ll consider what happened when quantum technologies unearthed these long-buried foundational questions. Along the way, we’ll also examine quantum mechanics’ connections to everything from ancient philosophy to Hollywood blockbusters—and consider what it means to build quantum computers while still being confused about what quantum phenomena tell us about reality.


How the Measurement Problem Forced Physicists to Make a Choice

Becker explains that quantum mechanics seems to demand two different sets of physical laws for identical particles, and which laws apply depends on whether anyone’s watching, as in the double-slit experiment described earlier in this guide. Physicists call this the “measurement problem”—the act of measurement appears to change the rules that govern particles. This creates a puzzle: Where does the transition between one set of rules and the other occur?

(Shortform note: What constitutes a “measurement”—and who qualifies as an observer—in quantum mechanics? Measurement requires an interaction that conveys information about the quantum system, and this interaction forces the system to choose definite states. For example, in the double-slit experiment, the detectors interact with the electrons, revealing which slit each one passed through. But as Becker points out, this creates a puzzle: If measurement devices are also made of quantum particles, why do they behave according to the laws of classical physics, producing definite results? Physicists don’t know; they haven’t yet defined where the boundary lies between the quantum world and the classical world.)

Schrödinger responded with a thought experiment: Imagine a cat in a box with a Geiger counter and a radioactive atom that has a 50% chance of decaying—triggering a hammer to break a vial of poison. Quantum mechanics says the radioactive atom exists in superposition, both decayed and not-decayed. If quantum mechanics applies universally, superposition extends to the Geiger counter (triggered and not-triggered), the vial (broken and intact), and the cat (dead and alive). Only when you open the box does everything “choose” definite states. Schrödinger thought this ridiculous: Cats are alive or dead regardless of observation. This exposed that either quantum mechanics was incomplete, or reality was stranger than anyone imagined.

(Shortform note: Schrödinger’s thought experiment was largely ignored for decades after he published it in 1935, as scientists and philosophers were troubled by the uncertainty it revealed. Writer Ursula K. Le Guin rediscovered it around 1972 and was fascinated—her 1974 short story “Schrödinger’s Cat” launched the thought experiment into mainstream consciousness. Le Guin saw a connection between fantasy literature and physics: Both require rejecting common sense explanations and embracing a radical, even imaginative, uncertainty about reality. Le Guin argued that fantasy and science share a fundamental willingness to question whether things have to be the way they are.)

Three Responses to the Measurement Problem

Becker explains that physicists developed three answers to the problem. Einstein and other realists insisted that quantum mechanics must be incomplete: that particles have properties the theory fails to describe. Bohr and the anti-realists suggested that particles don’t have properties until measured, which makes questions about unmeasured reality meaningless. Heisenberg, also an anti-realist, argued particles exist as “potentialities” until measurement makes them actual. By 1927, these crystallized into two competing visions: Realists insisted physics must describe an objective world that exists independently of observation, while anti-realists saw quantum mechanics as a tool for organizing experimental results rather than describing reality.

(Shortform note: Becker discusses the debate between realists and anti-realists over whether science describes reality or just organizes our observations. Yet QBism, a radical interpretation of quantum mechanics, suggests that this debate misses the point. In the same way that expressionist artists abandoned literal representation around the time that quantum mechanics developed—shifting from depicting objects as they appeared to expressing subjective encounters with those objects—QBism suggests that quantum mechanics may describe our relationship with nature rather than nature itself. This implies that we engage with the world through interaction and interpretation, not as an independent third-person observer.)

Einstein’s Realist Position: Quantum Mechanics Must Be Incomplete

Einstein, who’d contributed to quantum theory, found other physicists’ interpretations of the math unsatisfactory. Becker explains that Einstein objected to abandoning a reality that exists independently of observation. He believed science should describe the world as it really is and argued that if quantum mechanics described situations like Schrödinger’s cat, the theory must be incomplete.

Einstein aired this objection in a thought experiment about two particles that bounce off each other. If you measure one particle’s position and momentum after the collision, that instantly determines the other’s properties, regardless of the distance between them. However, according to quantum mechanics, the other particle can only exist as a probability wave until it’s directly observed. So, either that particle has properties (momentum and position) that quantum mechanics doesn’t describe, or nature violates the principle of locality—the idea that objects can only be influenced by their immediate surroundings. Because of this, Einstein concluded that quantum mechanics couldn’t represent the final truth about reality.

(Shortform note: Einstein’s principle of locality says that influences between distant objects must travel through space between those objects and take time to do so—like the delay between flipping a light switch and the electrical signal reaching a lamp. But quantum mechanics predicts that measuring one particle can instantly affect its distant partner, as if flipping a switch in New York could instantly turn on a light in Tokyo, without any physical connection between them. In addition to the problem Becker describes, this also troubled Einstein because it conflicted with his theory of relativity, which says nothing can travel faster than the speed of light.)

Einstein believed future developments would reveal quantum mechanics to be a statistical approximation of some deeper, more complete theory. Becker explains that in Einstein’s mind, this deeper theory could restore both locality and objective reality while preserving quantum mechanics’ practical successes.

Is It Possible to Find a Theory of Everything?

As Becker explains, Einstein envisioned a unified theory that would resolve the conflicts between relativity and quantum mechanics. The search for a “Theory of Everything” has captivated physicists for nearly a century, but some scientists question whether it’s a realistic goal. This theory would unify the four forces that govern everything in the universe: electromagnetism (which holds atoms together), the strong nuclear force (which binds particles in atomic nuclei), the weak nuclear force (which causes radioactive decay), and gravity. Currently, quantum mechanics explains the first three forces but fails to account for gravity, which Einstein’s general relativity describes instead.

Einstein spent 30 years pursuing this goal. But in Lost in Math, physicist Sabine Hossenfelder argues the search rests on an unscientific premise: the assumption that the laws of nature should be elegant and unified just because physicists find such theories mathematically pleasing. The problem isn’t that we lack the mathematical sophistication to explain the complexity of the universe, but that we may be chasing an idealized vision of that universe that’s just an illusion.

Bohr’s Anti-Realist Position: Questions About Unmeasured Reality Are Meaningless

Unlike Einstein, Bohr responded by abandoning the goal of making physics describe objective reality. As Becker explains, Bohr’s principle of complementarity held that certain pairs of properties can’t be observed at the same time, and that physicists needed both wave and particle descriptions to fully explain the world: Different experiments would reveal that light and matter have both of these “complementary” aspects, but they never apply at the same time. Further, Bohr argued that particles don’t have definite properties independent of measurement, so asking about where they are or what they’re doing when nobody is measuring them is meaningless. In sum, he concluded that quantum phenomena aren’t independently real.

(Shortform note: Bohr’s interpretation means that every object in the universe has both wave and particle properties. If so, then even a human has a measurable wavelength, though it’s too tiny to detect, and particles have wavelengths and create interference patterns just like light waves. But what actually causes this wave behavior to show up in physicists’ experiments? Bohr’s answer is that there’s nothing physically causing the wave behavior: What’s waving is a probability rather than a physical reality. For example, in the double-slit experiment, the electron exists in a wave of uncertainty representing all the places it could be, and this probability wave interferes with itself until the electron “decides” where to land.)
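The amplitude arithmetic behind that interference can be sketched in a few lines of Python: Probabilities come from squaring the sum of the two paths’ complex amplitudes, not from summing the paths’ separate probabilities. This is an illustrative calculation, not code from the book, and the slit separation and wavelength are arbitrary values:

```python
import cmath
import math

# Two-slit interference: the probability of an electron landing at angle
# theta is |amp1 + amp2|^2, not |amp1|^2 + |amp2|^2. The cross terms in
# the squared sum are what produce the bright and dark fringes.
d, lam = 5.0, 1.0  # slit separation and de Broglie wavelength (same units)

def intensity(theta):
    # Phase difference from the extra path length to one slit
    phase = 2 * math.pi * d * math.sin(theta) / lam
    amp1 = 1.0                    # amplitude for the path through slit 1
    amp2 = cmath.exp(1j * phase)  # path through slit 2, phase-shifted
    return abs(amp1 + amp2) ** 2 / 4  # normalized to 1 at the central peak

print(intensity(0.0))                       # central bright fringe: 1.0
print(intensity(math.asin(lam / (2 * d))))  # first dark fringe: ~0.0
```

If the two amplitudes were converted to probabilities before being added (as classical particles would require), every angle would get the same value and the fringes would vanish.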

Becker points out that Bohr’s interpretation created a divide in the anti-realist view of the world: There was a classical realm of real measurement devices and concrete experimental outcomes, and a quantum realm existing only as a mathematical formalism, not an independent reality. Bohr dismissed questions about what happens in the absence of observation, arguing that physics should focus on experimental results, not speculate about what’s unobservable. This let physicists use quantum mechanics without confronting its interpretive puzzles. Rather than asking what the mathematics meant about reality, they could just use it to predict experimental outcomes and leave philosophical questions aside.

(Shortform note: Physicist Katie Mack, author of The End of Everything (Astrophysically Speaking), explains that physics has only ever created mathematical models to accurately predict observations; it doesn’t necessarily reveal the truth about reality. For example, Newton’s equations predict planetary motion, but they don’t tell us what gravity really is—they just make calculations possible. Focusing on what works rather than what it means, as Bohr did, has allowed physics to move forward and inspired advances in abstract math. Mack argues that abstraction is “the whole point” of physics: creating models that explain what we observe, whether or not they describe the universe as it really is.)

Heisenberg’s Uncertainty Principle: A Different Kind of Anti-Realism

Heisenberg approached the measurement problem through his uncertainty principle: the idea that the more precisely you measure a particle’s position, the less precisely you can know its momentum, and vice versa. This wasn’t due to imperfect instruments but constraints imposed by quantum mechanics.

Why the Uncertainty Principle Is Really About Wave Behavior

The uncertainty principle makes more intuitive sense when you visualize quantum particles behaving like waves. Think of a ripple on the surface of a pond, but imagine that you can’t watch it passively. Instead, you have to physically interact with the water to get any information. To measure the wave’s speed, you’d need to place sensors in the water to time how long it takes peaks and troughs to pass between them—but those sensors prevent you from pinpointing the exact location of any single peak without disturbing it. To identify where one peak is, you’d need to place a sensor right at that spot, but this would disturb the wave and prevent you from measuring how quickly it’s moving.

Heisenberg realized that measuring quantum particles works the same way: Any attempt to observe them requires physical interaction, which creates “discontinuities” that alter what you’re trying to measure. This fundamental tradeoff doesn’t exist for classical objects, which can theoretically be measured with perfect accuracy if we have perfect instruments. But quantum particles are fundamentally wave-like, and it’s this wave nature that creates the uncertainty described by Heisenberg.
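Heisenberg’s tradeoff falls out of this wave picture mathematically: A wave packet that’s narrow in position is necessarily broad in its spread of wavelengths (and hence momenta), and vice versa. As a rough numerical sketch, assuming an illustrative Gaussian packet and arbitrary grid sizes, the product of the two spreads stays fixed no matter how the packet is squeezed:

```python
import cmath
import math

# Position/momentum tradeoff for a Gaussian wave packet, computed with a
# simple (slow) discrete Fourier transform. Grid and widths are arbitrary.
N, L = 256, 40.0
xs = [-L / 2 + L * i / N for i in range(N)]
ks = [2 * math.pi * (m - N // 2) / L for m in range(N)]

def spreads(sigma):
    """Return (position spread, wavenumber spread) for packet width sigma."""
    psi = [math.exp(-x * x / (4 * sigma * sigma)) for x in xs]
    norm = math.sqrt(sum(p * p for p in psi))
    psi = [p / norm for p in psi]
    sx = math.sqrt(sum(x * x * p * p for x, p in zip(xs, psi)))
    # Fourier-transform to momentum (wavenumber) space
    phi = [sum(p * cmath.exp(-1j * k * x) for x, p in zip(xs, psi)) for k in ks]
    pnorm = math.sqrt(sum(abs(c) ** 2 for c in phi))
    sk = math.sqrt(sum(k * k * (abs(c) / pnorm) ** 2 for k, c in zip(ks, phi)))
    return sx, sk

for sigma in (0.5, 1.0, 2.0):
    sx, sk = spreads(sigma)
    # Narrower in position means wider in momentum; the product stays ~0.5
    print(round(sx, 3), round(sk, 3), round(sx * sk, 3))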

Becker explains that, like Bohr, Heisenberg took an anti-realist position, arguing that particles don’t have definite properties until measured. Yet while Bohr denied that any reality existed between measurements, Heisenberg proposed that particles exist, but only in a realm of “potentialities” rather than actualities. This solution created new puzzles: If particles exist only as potentialities, how do they interact with scientific instruments to produce definite measurements? How can something without actual characteristics cause specific readings? Despite their differences, both Bohr and Heisenberg reached the same conclusion: Questions about what particles are doing between measurements are meaningless.

The Ancient Roots of Quantum Potentialities

Heisenberg’s concept of quantum potentialities borrowed from ancient Greek philosophy, specifically Aristotle’s distinction between “potentiality” (dunamis) and “actuality” (energeia). For Aristotle, reality had multiple layers: not just what actually exists, but also what could potentially exist. For instance, an acorn contains the potentiality to become an oak tree; the mature oak is the actualization of that potential. The acorn contains “treeness” as a real, but not yet manifest, aspect of its being. However, becoming a tree isn’t guaranteed; the acorn could become nothing at all.

Just as Aristotle argued that the same thing could have contradictory potentialities, but never contradictory actualities, Heisenberg argued that quantum particles exist in superpositions of multiple states until measurement actualizes one of these states. Classical physics assumes objects have only definite, actual properties, but Aristotelian thinking explains how quantum mechanics can describe situations that are impossible in classical physics: It doesn’t deal with classical objects at all, but with things that exist in multiple layers of reality simultaneously.

Why Bohr’s Anti-Realism Prevailed

Becker contends that the measurement problem should have started a debate that didn’t stop until answers emerged. Instead, physicists accepted Bohr’s anti-realism—not because it offered a compelling solution to the problems posed by quantum mechanics, but because world events and institutional forces made pursuing answers professionally dangerous. The textbook story is that physicists agreed on a new interpretation of quantum mechanics at the 1927 Solvay Conference. But Becker argues this story is false. The debate revealed no unified position among Bohr’s supporters, just an alliance of opposition to Einstein’s realism. Only decades later would this collection of anti-realist views be labeled the “Copenhagen interpretation.”

There were two other reasons that anti-realism prevailed. First, physics evolved from a philosophical discipline into a massive military enterprise. During World War II, thousands of physicists worked on the Manhattan Project, the US’s program to build atomic bombs. After the war, military funding continued pouring into physics to develop weapons, radar systems, and other technologies. This meant physicists spent their time completing practical calculations rather than solving the theoretical puzzles that Einstein and Bohr’s generation debated.

Second, physicists who attempted to develop realistic alternatives to the Copenhagen interpretation faced career destruction. Becker reports that those who proposed viable interpretations were dismissed without serious scientific engagement and often lost their chances of finding academic employment because they didn’t “toe the line.” By the 1960s, the physics community had stopped asking hard questions about the meaning of quantum mechanics, treating this abandonment of foundational inquiry as scientific maturity rather than intellectual failure.

How Scientific and Moral Reasoning Intersected

Other experts agree with Becker that the Copenhagen interpretation was less a coherent position on what quantum mechanics meant and more a coalition of opposition to Einstein’s realism. Beyond rejecting Einstein’s realism, physicists disagreed on basic issues: Some emphasized that consciousness must play a role in measurement, while others rejected this idea. Jim Baggott (Quantum Drama) argues that the dominance of this view, despite such disagreements, reflected a culture of indifference toward interpretive questions, especially among American physicists.

But if physicists were pressured to treat questions about what was real as unscientific, might this have also made it easier to avoid asking what was morally right as World War II turned physics into a military operation? American physics culture was pragmatic and anti-philosophical even before the war. During the war, relatively few Manhattan Project scientists interrogated the moral implications of their work. The uncertainty about quantum mechanics’ meaning might have blurred a sense of moral responsibility for the theory’s material consequences.

Foundational Questions Survived the Forced Consensus

Despite decades of institutional hostility, the fundamental questions about quantum mechanics’ meaning proved impossible to eliminate. Becker explains that experimental breakthroughs and theoretical innovations gradually rehabilitated foundational research—and revealed that the same questions troubling Einstein and Schrödinger remained unresolved, creating ongoing tensions about science’s ultimate purpose and the nature of reality itself.

Bell’s Theorem Transforms Philosophy Into Experiment

The return to these fundamental questions began in 1964 with John Bell, who was skeptical about a mathematical argument that had supposedly proven the Copenhagen interpretation was the only possible approach to quantum mechanics. This argument was John von Neumann’s 1932 “impossibility proof,” which claimed to show that no theory with “hidden variables,” where particles have definite properties before measurement, could reproduce quantum mechanics’ predictions. As Becker notes, this seemed to prove that realist positions like Einstein’s were mathematically impossible: If particles can’t have definite properties before measurement, then only anti-realist interpretations like Copenhagen could be correct.

But Bell discovered that von Neumann’s proof was flawed. Bell then reconsidered Einstein’s thought experiment and transformed his philosophical concerns into mathematical tests. Bell reasoned that if particles do have definite properties before measurement, then measurements on entangled particles should obey certain mathematical constraints on how strongly correlated the results can be, which became known as “Bell’s inequalities.” Quantum mechanics predicts that entangled particles will violate these limits, so this gave physicists a decisive test, and experiments in 1972 and 1982 proved that particles violate Bell’s inequalities. This meant Einstein was right about “spooky action at a distance” being real, but wrong about quantum mechanics being incomplete. The theory wasn’t missing information; reality really was nonlocal.
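The logic of Bell’s test can be sketched with the standard quantum prediction for correlations between spin measurements on entangled particles, E(a, b) = −cos(a − b). The calculation below uses the conventional angle choices for the CHSH form of Bell’s inequality; it’s an illustration of the math, not code from the book:

```python
import math

# Quantum prediction for the correlation between spin measurements on two
# entangled particles, with detectors set at angles a and b.
def E(a, b):
    return -math.cos(a - b)

# CHSH combination: any local hidden-variable theory obeys |S| <= 2.
a1, a2 = 0.0, math.pi / 2              # Alice's two detector settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two detector settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the classical limit of 2
```

The experiments Becker describes measured exactly this kind of combination and found the quantum value, not the classical bound, ruling out any local hidden-variable account.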

Why Bell Thought Einstein Was Right to Be Worried

For decades, physicists dismissed Einstein’s concerns about quantum mechanics by citing von Neumann’s proof. But the proof contained a fatal flaw: It assumed that hidden variables must behave according to a “linearity” rule that sounds reasonable mathematically but makes no physical sense. Specifically, von Neumann required that if you could measure two quantum properties A and B separately, then you should also be able to measure their mathematical combination (A + B). But in quantum mechanics, many properties can’t be measured simultaneously, so demanding that their combination be measurable is physically meaningless.

The experiments that proved particles violate Bell’s inequalities ruled out any local hidden-variable account of reality: If particles do have objective properties, as realists maintain, those properties must be nonlocal. This confirmed that “spooky action at a distance” really exists and supported Einstein’s concerns about quantum mechanics’ disturbing nonlocal implications: Locality is fundamental to our spatial understanding of the world—it lets us identify different locations and treat them as independent. Einstein worried that abandoning locality meant abandoning our framework for understanding cause and effect. Bell’s theorem proved that reality really is as strange as Einstein feared, validating his intuition that such strangeness deserved serious concern rather than casual acceptance.

Physicists Developed Three Alternative Paths Forward

Becker explains that Bell’s theorem forced physicists to face a choice: Abandon the principle of locality (and accept the idea of instant connections across space), abandon realism (and accept that properties don’t exist before measurement), or abandon the idea that quantum mechanics is complete. Three alternative interpretations represent different responses to this choice. They all revolve around the question of what causes wave function collapse—the moment when the “probability wave” of a quantum particle’s potential location and momentum “collapses” into the specific characteristics it takes on when it’s observed and measured.

Many-Worlds: Preserve Everything by Multiplying Universes

The many-worlds interpretation offers one escape route: Physicists could preserve both locality and realism by abandoning the assumption that only one outcome occurs. Becker explains that in this view, wave functions never collapse. Instead, all possible measurement outcomes happen in parallel branches of reality. This dissolves Bell’s dilemma by denying there’s a single definite result to correlate across space. When you measure an entangled particle, every possible outcome occurs, but each version of you experiences only one, in its own branch. The apparent nonlocality results from observers’ limited perspective: We only see one branch of reality while remaining unaware of countless others.

Becker points out that under this interpretation, Schrödinger’s cat is both alive and dead, but in separate branches of reality. The measurement problem vanishes because measurements don’t force choices—they simply reveal which branch of the universal wave function we happen to be experiencing.

Many-Worlds Goes Hollywood

The film Everything Everywhere All at Once dramatizes the idea of the multiplying universe—and imagines how becoming aware of these multiple worlds could feel liberating or devastating. In the film, laundromat owner Evelyn discovers she can access memories and skills from alternate versions of herself across the multiverse—versions where she became a movie star, a chef, or even a being with hot dog fingers. Her daughter Joy, pushed too far into multiverse consciousness, experiences all possible versions of herself simultaneously. This overwhelming perspective leads Joy to conclude that nothing matters since every possible outcome occurs somewhere, which drives her toward nihilistic self-destruction.

This portrayal of Joy parallels the real-life trajectory of many-worlds physicist Hugh Everett III. After developing his interpretation of quantum mechanics, Everett abandoned academic science, became an alcoholic defense contractor working on nuclear war scenarios, and died at 51, leaving instructions for his ashes to be thrown into the garbage. Like Joy, Everett seemed crushed by the implications of his discovery. The film gives Evelyn a different way forward, when she learns to come to terms with the multiplicity of her existence through compassion, which some critics see as a nod to Buddhist philosophy.

In fact, Buddhism aligns with what some physicists think the many-worlds interpretation really means: They argue the multiverse doesn’t continually split into new universes. Instead, all possible universes already exist in a “universal wave function.” Rather than creating infinite new realities with each move we make, we’ve always been part of this infinite reality. This mirrors Buddhists’ idea that our sense of being separate individuals is an illusion, and we’re really part of a vast, interconnected whole. The film suggests recognizing this vastness need not lead to nihilism, but can inspire us to engage fully with whatever branch of existence we happen to inhabit.

Pilot-Wave Theory: Accept Nonlocality, Restore Objective Reality

The pilot-wave interpretation takes a different approach: Accept Bell’s proof of nonlocality while restoring the objective reality Einstein sought. Becker notes that according to this view, particles always have definite positions and properties, and they’re guided by “pilot waves” that can influence distant particles instantly. This eliminates the measurement problem by removing the need for wave function collapse. Particles follow definite trajectories determined by waves, and measurements reveal where particles are. There’s no mystery about obtaining definite results: The particles detected in any experiment existed in definite states all along; we just didn’t know which ones until we measured them.

In the double-slit experiment, for example, each electron takes a definite path through one slit or the other, but the pilot waves go through both slits and create the interference patterns that guide where electrons can land on the detection screen. This explains the wave-like results without requiring particles to somehow pass through multiple slits simultaneously. Becker explains that the price is explicit nonlocality: Pilot waves connecting entangled particles provide the “spooky action at a distance” that Bell proved was unavoidable. Many physicists find this disturbing, but the interpretation at least makes the nonlocal connections explicit rather than hiding them within the measurement process itself.

Invisible Connections Across Vast Distances

Pilot-wave theory might sound impossibly strange, but nature already shows us how invisible waves can carry information across vast distances. Humpback whale songs first recorded off eastern Australia later appear among whale populations in French Polynesia, then Ecuador, traversing vast stretches of the Pacific Ocean. The songs travel through ocean “sound channels” created by temperature and pressure gradients that allow sound waves to bounce up and down across thousands of miles without losing energy.

Like the “spooky action at a distance” that links entangled particles across space, whale songs carry cultural information between cetaceans who may never meet. If whale songs traveled at the speeds of the proposed “pilot waves” that connect entangled particles, a whale singing in Australia could be heard instantaneously by whales in Ecuador, influencing their behavior just as nonlocal quantum particles instantly affect each other.

Spontaneous Collapse: Modify the Mathematics

Spontaneous collapse theories take a third approach: They modify quantum mechanics to make wave function collapse a natural physical process rather than something mysterious triggered by measurement. These theories propose that wave functions randomly collapse on their own, with larger objects collapsing much more frequently than individual particles. Becker explains that this preserves both locality and objective reality by making collapse happen randomly rather than through nonlocal measurement interactions. Individual particles might remain in superposition for billions of years, but macroscopic objects that contain countless particles resolve into definite states almost instantly as random events accumulate.

According to Becker, this approach dissolves the measurement problem by eliminating the need for special measurement processes—collapse happens naturally through the theory’s modified dynamics. Schrödinger’s cat wouldn’t remain in a “both dead and alive” state of superposition for more than a split second because random wave function collapse would quickly force a definite outcome.

Why “Fixing” Quantum Mechanics by Adding Randomness Might Not Work

As Becker explains, spontaneous collapse theories make wave function collapse a natural process, but experiments have dealt blows to these theories. The problem is that if spontaneous collapse really occurs, the random collapse process should cause charged particles to constantly jiggle around, emitting detectable X-ray radiation. But ultra-sensitive detectors in underground laboratories designed for neutrino research have found no evidence of this.

The irony is that these theories were designed to eliminate quantum mechanics’ weird aspects, but they do so by adding fundamental randomness to the universe’s basic laws, showing that sometimes the “cleanest” theoretical solution creates more problems than it solves.

Why These Questions Matter

Becker reports that the revival of research into the interpretation of quantum mechanics coincided with the emergence of quantum technologies, which exploit the same strange phenomena—superposition, entanglement, and nonlocality—that created the original crisis. Quantum computers derive their power from superposition, performing many calculations simultaneously. Quantum cryptography exploits entanglement to create secure communications. Every successful quantum technology validates quantum mechanics’ mathematical accuracy while highlighting how little we understand what that mathematics means about reality.

Becker argues that dismissing the questions posed by quantum mechanics represents a retreat from physics’ mission to understand the nature of reality. The measurement problem touches the core of our most successful scientific theory and may prove essential for developing theories capable of unifying quantum mechanics with gravity and cosmology.

Why We Still Don’t Know “What Is Real”

The success of quantum computing illustrates Becker’s point that we can build technologies that take advantage of quantum phenomena while remaining confused about what this means for the nature of reality. Today’s quantum computers contain more than 1,000 qubits, which can represent 0 and 1 simultaneously, unlike classical bits that are always one or the other. But these machines remain highly error-prone and can only run calculations for brief periods before quantum effects collapse, forcing the qubits to behave like ordinary classical bits. Despite these limitations, quantum computers can already solve certain kinds of problems exponentially faster than the world’s most powerful supercomputers.
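The exponential scaling comes from the state vector: Describing n qubits requires 2^n amplitudes, one for each classical bit string the register could collapse to. Here’s a toy illustration in plain Python (not a real quantum-computing API) of a uniform two-qubit superposition:

```python
import itertools

# A 2-qubit state holds 2**2 = 4 complex amplitudes, one per classical
# outcome (00, 01, 10, 11) -- the sense in which qubits "represent 0 and 1
# simultaneously." A uniform superposition gives each outcome equal weight.
n = 2
amp = 1 / (2 ** (n / 2))  # amplitude whose square is each outcome's probability
state = {bits: amp for bits in itertools.product("01", repeat=n)}

for bits, a in state.items():
    print("".join(bits), "probability:", abs(a) ** 2)  # 0.25 for each outcome
# Measurement "collapses" the state to a single bit string; the
# probabilities of all outcomes sum to 1.
```

Doubling the register to 4 qubits would require 16 amplitudes, and 1,000 qubits would require more amplitudes than any classical computer could store, which is where quantum hardware’s potential advantage comes from.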

This creates a paradox. We’re spending billions of dollars to construct machines that exploit quantum phenomena that the Copenhagen interpretation claims have no objective reality. Yet technologists are betting entire industries on the assumption that superposition and entanglement are real enough to manipulate for practical computation. The confusion deepened when Google claimed that the success of its 2024 Willow chip supported the idea that quantum computers work by processing information across multiple parallel universes. This interpretation suggests each quantum calculation occurs simultaneously in different realities, with quantum computers essentially outsourcing work to parallel dimensions.

But many physicists reject this explanation. Ethan Siegel (Beyond the Galaxy) notes that quantum computers work perfectly well using quantum mechanical properties within our single universe; no parallel dimensions are required. The multiverse debate might also miss the deeper point: Quantum computing’s success demonstrates that something real is happening, even if we can’t agree on what. Nearly a century after quantum mechanics’ development, we still face the same puzzle that troubled Einstein, Bohr, and their contemporaries—our mathematics works with extraordinary precision, yet we remain as confused as ever about what it tells us about the universe we inhabit.
