
Nicholas Carr’s book Superbloom: How Technologies of Connection Tear Us Apart argues we’ve traded physical existence for hyperreality—a state where digital simulations such as Instagram feeds and AI-generated content carry more weight than real-life experiences. From the erosion of empathy to the rise of political polarization, Carr explores how algorithmic platforms exploit our psychology, reshaping our brains and undermining the foundations of democracy.
Keep reading to discover why Carr believes the only way to reclaim our humanity and deep-thinking skills is through individual resistance and a deliberate return to the “friction” of the physical world.
Overview of Superbloom by Nicholas Carr
In his book Superbloom, Nicholas Carr argues that we now live in what philosophers call “hyperreality”—a state where digital simulations have replaced physical reality, with serious consequences. We experience the world through screens: understanding our friends through their posts rather than in conversation, or watching concerts through our phones’ cameras rather than with our eyes. Carr argues this shift has contributed to a mental health crisis, eroded our capacity for empathy and attention, fragmented our identities, and fueled political polarization. He believes that every new communication technology—from the telegraph to social media—has made the same false promise of better connection, but has instead delivered division and conflict.
The title of Carr’s book comes from a 2019 incident when heavy rains produced a California poppy bloom that went viral on Instagram, and thousands of people traveled to photograph themselves in the poppy fields. For these people, the digital recording of the experience took precedence over the experience itself—in this state of hyperreality, capturing and sharing images to accumulate likes and comments trumped the reality of witnessing a natural wonder.
Carr has been writing about tech’s effects on human consciousness since 2008, when The Atlantic published his essay, “Is Google Making Us Stupid?” His 2010 book The Shallows warned that digital media is reshaping our cognition. His 2025 book Superbloom: How Technologies of Connection Tear Us Apart extends this analysis into the era of social media and AI, challenging Silicon Valley’s optimism about these tools and critiquing the assumption that better regulation can fix their flaws. In fact, he argues we keep building communication technologies that harm us because we misunderstand human nature.
This overview organizes Carr’s arguments into four sections:
- What hyperreality is and how the features of our digital platforms created it
- How hyperreality affects us
- How hyperreality came about, tracing the historical pattern of misplaced optimism and the psychological vulnerabilities that left us susceptible to exploitation
- How we should respond—why Carr believes individual resistance through embodied experience offers the best path forward
What Is Hyperreality?
Carr opens Superbloom by diagnosing the social condition we now inhabit: We’re living in what he calls a perpetual superbloom—an endless flood of digital messages, images, and interactions that has altered how we experience reality. We’ll explore what this state looks like, how it reshapes our minds and relationships, and why Carr believes it threatens the foundations of democratic society.
We’re Living in a Simulated World
Carr explains that we increasingly experience life indirectly through our digital screens. We get our news through algorithmically curated feeds. We learn about our friends through their social media posts instead of having actual conversations. We watch concerts through our phone cameras, capturing videos we may never rewatch instead of enjoying the performance with our eyes and ears. We choose restaurants, hotels, and experiences based on how well they’ll photograph rather than how much we’ll enjoy them. French philosopher Jean Baudrillard coined the term “hyperreality” in the 1970s to describe this state, where simulations of things in the real world replace our direct experience of reality.
In hyperreality, the simulation doesn’t just help us interpret reality: It becomes the primary object of our attention, replacing the thing it originally stood for. Carr cites the 2019 poppy bloom to illustrate how this works. When heavy rains produced a spectacular bloom in Walker Canyon, southeast of Los Angeles, influencers’ photos of themselves among the orange flowers went viral under the hashtag #superbloom. Thousands of people traveled not to see the poppies, but to photograph themselves in the canyon and share the images online: to perform their presence for an audience. Traffic overwhelmed local roads, visitors trampled flowers, a police officer was injured, and authorities declared a public safety emergency.
What Carr finds remarkable about the 2019 superbloom is that the physical flowers existed primarily as a backdrop for a digital event. Digital stand-ins—the hashtag, the photos, the likes, and the comments—became more real than the actual landscape. Now, Carr argues that we all live in a perpetual superbloom. The digital world offers endless stimulation: infinite scrolling feeds, autoplay videos, and algorithmic recommendations that anticipate your desires. By comparison, physical reality seems boring and underwhelming, with all the frictions of material existence. As we learn to function in the simulated environment, hyperreality changes us, reshaping how we think, relate to others, and function as a democratic society.
How Digital Systems Created Hyperreality
Hyperreality didn’t emerge by accident. Carr argues that specific features built into digital platforms systematically removed the constraints that had limited earlier communication technologies, creating an environment perfectly suited to exploit human psychological vulnerabilities.
First, everything competes for attention. Carr notes that, in the past, different kinds of information arrived through different media. Letters, newspapers, telephone calls, television broadcasts, and books each had a distinct physical form and social meaning that helped you organize and prioritize information. Digital technology eliminated these distinctions. Everything became data competing for attention on the same terms—serious news stories compete with cat videos and conspiracy theories on the same playing field. Without these distinctions, engagement metrics reign supreme, and what gets the most engagement is whatever sparks the strongest emotions, not what’s important or even true.
Facebook’s News Feed, introduced in 2006, automated this competition for engagement and attention. Carr explains that, previously, you had to deliberately click through other users’ profiles to see their content. The News Feed changed this: Algorithms now evaluate and rank information without regard to meaning, using statistical analysis of your past behavior to predict what will capture your attention. Facebook’s internal research confirmed that the platform exploits divisiveness because divisive content generates more engagement, which generates more advertising revenue. This means algorithms shape what you see based on the commercial interests of digital media companies, without regard for democratic values or your well-being.
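To make this ranking mechanism concrete, here’s a minimal sketch of engagement-based feed scoring. It’s a toy model under simplified assumptions: the signals, weights, and names (such as predicted_engagement) are illustrative inventions, not Facebook’s actual, proprietary algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    # Illustrative signals a platform might track; real feed algorithms
    # use far more features, and their weights are proprietary.
    past_click_rate: float   # how often this user engaged with similar posts
    outrage_score: float     # strength of emotional reaction the post provokes
    recency: float           # 0.0 (old) to 1.0 (just posted)

def predicted_engagement(post: Post) -> float:
    """Toy scoring function: rank purely by what is likely to grab attention."""
    return 0.5 * post.past_click_rate + 0.3 * post.outrage_score + 0.2 * post.recency

def build_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    # The feed is simply the highest-scoring content, whatever it is:
    # a policy analysis and a conspiracy theory compete on the same terms.
    return sorted(posts, key=predicted_engagement, reverse=True)[:limit]
```

Even in this toy version, nothing in the scoring asks whether a post is true or important, only whether it’s likely to hold your attention.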
Second, digital platforms removed natural constraints on information and stimulation. Carr explains that physical environments limit the seeking drive—our dopamine-fueled impulse to explore. There’s only so much to see in a physical space before the novelty wears off and your mind settles. The same was true for communication delivered through the physical world, like letters that took days to arrive, creating natural pauses in a written conversation. But digital platforms eliminated these constraints through features such as infinite scroll and autoplay. Each acceleration of communication—from letters to telegraphs to email to social media—removed more time for deliberation between messages.
Third, identity became a constant performance. Carr explains that, before social media, social interactions occurred within bounded contexts—distinct places and times that separated different social situations. You performed different versions of yourself for family, colleagues, and friends, with transitions between these contexts giving you intervals of solitude. Social media dissolved these boundaries. All your social contexts now exist simultaneously and continuously. There’s no “offstage” where you can retreat from social demands. You must either adopt a single rigid public persona or juggle multiple personas. The result is that your identity becomes fragmented across countless reflections rather than feeling stable and coherent.
How Does Hyperreality Change Us?
Carr argues that, when digital platforms collapse all content into a single competitive stream, eliminate natural constraints on stimulation, and dissolve the boundaries between social contexts, they create the infrastructure of hyperreality. Now we can examine what living in this system does to us as individuals, to our relationships, and to our ability to function as a democratic society.
Hyperreality Reshapes How We Think
Carr reports that living in hyperreality changes how we think in ways that worsen our mental health and undermine our humanity. He notes that social media creates an environment where you’re constantly comparing yourself to others, living in fear of missing out, feeling pressure to manage your image, and losing sleep as you scroll endlessly. As evidence, Carr points out that between 2010 and 2019, as smartphones and social media became ubiquitous, depression rates among American teenagers doubled. Anxiety disorders surged, and suicide rates and hospitalizations for self-harm rose. The same pattern played out across dozens of countries, evidence that the demands social media places on our minds aren’t healthy.
Carr writes that, in addition to creating unhealthy patterns of thinking, living in hyperreality erodes your capacity for sustained attention and deep thought. With constant stimulation, there’s always another post, video, or notification to grab your attention. When your feed constantly refreshes and social norms demand immediate reactions, there’s no time for careful reflection or deliberative, critical thinking.
Perhaps most troubling, Carr contends that spending most of your time engaged with screens limits what kind of thinking you’re capable of. He explains that our ability to understand the world develops through physical interaction with it: through touch, movement, spatial navigation, and unmediated sensory experience. When we bypass this embodied learning and interact with digital representations instead, we become skilled at recognizing patterns and recombining existing ideas, but we lose the capacity to create genuinely new understanding. Like AI systems that mix and match what they’ve seen in their training data, we risk becoming derivative thinkers who can’t generate original insights.
Hyperreality Undermines Human Connection
Beyond reshaping how we think, hyperreality also transforms how we relate to other people, and not for the better. Carr argues that digital communication undermines human connection—ironically, by making one-way connection easier. We intuitively believe that getting to know people better makes us like them more, but research finds the opposite pattern: In studies designed to mimic the kind of one-sided information disclosure that happens as we follow people on social media, participants received varying amounts of trait information about fictional people—and the more they learned, the less they liked the person being described.
This occurs through what researchers call dissimilarity cascades: You tend to like those who seem similar to you and dislike those who appear different. Crucially, the disliking tendency proves stronger. Once you encounter a significant dissimilarity, you interpret subsequent information as further evidence of difference while similarities fade into the background. Social media, where people continuously broadcast their personal information and opinions, creates perfect conditions for these dissimilarity cascades. As you encounter endless streams of information about people’s political views, religious beliefs, parenting philosophies, and consumer preferences, the differences accumulate.
Carr argues the damage might be contained if social media at least fostered empathy to counterbalance this dynamic. But to empathize with other people, we need to pay sustained attention to them and communicate within the close proximity needed to read another person’s physical cues—their facial expressions, tone of voice, and body language. Communication through screens lacks these elements, making it nearly impossible to develop the attention and observational skills that enable us to practice empathy for others.
Hyperreality Threatens Democracy
The effects of hyperreality extend beyond individual psychology and personal relationships to threaten democratic governance itself. Carr argues that democracy requires some degree of shared reality—agreement on basic facts, common information sources, and the ability to distinguish truth from falsehood. These foundations crumble in hyperreality.
On digital platforms, all information competes for attention on the same terms, with algorithms delivering whatever captures attention most effectively without regard for its importance or accuracy. Serious policy discussions must compete with outrage bait and conspiracy theories. Because your mind uses repetition as a proxy for truth—what you encounter repeatedly feels familiar, and familiarity signals reliability—repeated exposure to false stories makes them feel true. The result is a fragmented information environment where different groups inhabit different realities and reach incompatible conclusions about basic questions.
Carr warns that artificial intelligence threatens to make this fragmentation even more severe. He notes that generative AI systems can now create deepfakes: images, videos, and audio nearly indistinguishable from authentic recordings. When you can no longer trust visual or audio evidence, you may begin doubting all information presented through any form of media. This shift would undermine democratic accountability, and whoever controls the technology for producing synthetic media will gain unprecedented power to shape our collective understanding of what’s real.
Why We Built a System That Harms Us
The hyperreality Carr describes wasn’t an inevitable result of technological progress; it stemmed from a specific mistake we’ve made repeatedly: believing that more communication would produce greater understanding. We’ll explore why we keep making this mistake and how our psychology leaves us vulnerable to technologies designed to exploit our cognitive limitations.
We Believed Communication Would Unite Us
Carr shows that misplaced optimism about communication technology stretches back over 150 years. The telegraph was expected to abolish war, radio to usher in an era of tolerance, television to unite all peoples, the internet to democratize information, and social media to build a global community. This hope persisted because it seemed intuitively obvious that, if we could just communicate more effectively with each other, we would understand each other better.
In 1922, journalist Walter Lippmann challenged the assumption that better communication leads to better understanding by questioning whether communication could ever produce unity when people perceive reality through different lenses. He argued that modern society had become too complex for individuals to grasp directly, and that we necessarily construct what he called “pseudo-environments”—simplified mental models filtered through our limited information and biases. If everyone operates from their own distorted understanding of reality, Lippmann reasoned, more communication won’t create unity; people will just talk past each other.
We Misunderstood Human Nature
Modern psychological research has vindicated Lippmann’s skepticism by identifying the specific mechanisms behind our cognitive limitations. Carr notes that we have bounded rationality: Our attention and processing capacity are finite, so we rely on mental shortcuts to navigate complexity. Psychologists distinguish between two modes of thinking—fast, intuitive judgment and slow, deliberate analysis. Social media’s speed and volume favor the fast mode, forcing you to rely on gut reactions instead of careful reasoning.
Carr’s argument is that these cognitive limitations aren’t bugs that better technology can fix—they’re fundamental features of human psychology that any communication system must account for. The problem is that we designed digital systems as if these limitations don’t exist, or as if exposing people to more information faster would somehow overcome them. We misunderstood ourselves, and that misunderstanding allowed us to build systems that exploit our weaknesses.
Personal Resistance to Hyperreality Is Our Best Hope
Given the depth of the problem Carr sees—a technological system built on misplaced optimism that now exploits our psychological vulnerabilities at a massive scale—what can we do about it? We’ll examine why Carr believes top-down solutions such as regulation and antitrust enforcement will likely fail, and why he argues that individual acts of resistance grounded in physical reality offer the most promising, if modest, path forward.
Top-Down Solutions Will Likely Fail
Many critics of social media advocate regulatory changes to fix the system’s worst harms. Some propose what they call “frictional design,” which would encourage more thoughtful behavior by deliberately reintroducing inefficiencies into platforms. This might include adding delays before new posts appear, limiting how many times messages can be forwarded, adding extra clicks to like or reply, or even banning infinite scroll, autoplay, and personalized feeds. The logic is appealing: If removing friction created the problem, adding it back might solve it.
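To picture what frictional design could look like in practice, here’s a minimal sketch of a toy messaging layer; the class, the forwarding cap, and the delay value are illustrative assumptions, not features of any real platform or a specific proposal from Carr.

```python
import time

class FrictionalMessenger:
    """Toy messaging layer that deliberately reintroduces friction."""

    MAX_FORWARDS = 5          # illustrative cap on how far a message can spread
    POST_DELAY_SECONDS = 60   # illustrative pause before a new post goes live

    def __init__(self) -> None:
        self.forward_counts: dict[str, int] = {}

    def forward(self, message_id: str) -> bool:
        # Limit how many times any one message can be forwarded,
        # slowing the viral spread of emotionally charged content.
        count = self.forward_counts.get(message_id, 0)
        if count >= self.MAX_FORWARDS:
            return False
        self.forward_counts[message_id] = count + 1
        return True

    def post(self, text: str) -> dict:
        # Instead of appearing instantly, each post is scheduled to publish
        # after a delay, restoring a pause for deliberation.
        return {"text": text, "publish_at": time.time() + self.POST_DELAY_SECONDS}
```

Each method mirrors one of the proposals above: capped forwarding slows viral spread, and delayed publishing restores time for second thoughts.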
But Carr argues that measures to reintroduce friction will likely fail for several reasons. First, history shows that, once people adapt to greater efficiency, reductions in efficiency feel intolerable. In a culture used to instant gratification, frictional design would be nearly impossible to sell to users. Second, political obstacles make effective regulation unlikely, and many people would see such interventions as government overreach. Third, complex technological systems become nearly impossible to change once they’re deeply embedded in society because changing them causes too many disruptions for too many people. Carr contends that the moment to shape the internet’s development was in the 1990s, and that moment has passed.
Carr thinks that other proposed solutions face similar limitations. Breaking up tech companies through antitrust actions might increase competition, but it won’t stop the next generation of companies from facing the same incentives and reproducing similar problems. Similarly, content moderation addresses symptoms rather than causes: The issue isn’t just harmful content but that algorithms promote whatever generates engagement, and users themselves create the demand for divisive material.
There’s also what Carr calls the media absorption effect: Even resistance to the system gets absorbed by it. When people criticize social media on social media, the system neutralizes their opposition by turning it into another form of engagement.
We Need to Reconnect With Physical Reality
If society-level solutions won’t work, what’s left? Individual resistance can’t fix a fragmented democracy, an epidemic of loneliness, or the society-wide erosion of deep thinking—but Carr argues it’s the only way to preserve your own capacity for connection and deep thought. He explains that we’re complicit in creating and maintaining hyperreality because we actively choose the simulation, and companies profit by providing it. This means the problem can’t be solved by making companies behave better. The system is too embedded, our habituation too complete, and our desires too aligned with what it provides. Society-level change would require most people to want something different—and Carr sees little evidence that’s happening.
Since collective action won’t work, Carr argues each person must decide whether to accept the terms of hyperreality or to position themselves at its margins. This means choosing to engage with the physical world frequently and carefully enough that you’re reminded reality exists independent of your perceptions and preferences, and that the material world pushes back against your desires. In practical terms, this means prioritizing physical experiences over digital representations: having conversations where you look at people’s faces and read their expressions, taking walks without your phone so you notice your surroundings, reading books that require sustained attention, and allowing yourself to experience boredom and solitude.
Carr doesn’t pretend that resisting hyperreality is easy. Opting out means losing connection to the social networks that organize much of modern life—you’ll miss references, feel excluded from conversations, and sometimes lose touch with what’s happening in society. Market forces, peer pressure, and your own instincts will pull you back toward hyperreality. Even if you succeed in positioning yourself at the margins, it won’t solve the broader social problems, but Carr insists it matters anyway because of what’s at stake for you as an individual. The qualities that make us most human—empathy, depth of thought, and the ability to create meaning—require us to be grounded in physical reality.
Carr concludes that you can preserve your capacity for deep, embodied thought even if society as a whole continues down a different path. The choice is whether to accept the rules embedded in systems designed to exploit your weaknesses, or to construct a life deliberately oriented toward the physical world that exists beyond the screen.
