Reading List: 7 Books That Help You Make Sense of AI

by Shortform Explainers

AI literacy isn’t optional anymore. These 7 books reveal how large language models actually work, why they’re not magic, and how to think critically about the technology transforming everything around us.


Introduction: Beyond the Hype and Fear

Artificial intelligence has become impossible to ignore, yet most of us struggle to understand what’s really happening behind the attractive interfaces of ChatGPT, Claude, and the other AI tools reshaping our lives. The public conversation swings between breathless predictions of AI solving all human problems and apocalyptic warnings about humanity’s obsolescence—leaving little room for nuanced understanding.

The reality is both more complex and more urgent than either extreme suggests. Many people fundamentally misunderstand how AI works, creating confusion that affects how we use these tools at work, how we teach our children, and how we navigate an increasingly AI-mediated world.

These seven books offer different pathways into the same essential project: developing “AI literacy”—the ability to understand what these systems actually do, where they excel, where they fail, and how to think critically about their role in society. Together, they provide both the technical foundation and the critical perspective needed to engage thoughtfully with AI rather than being swept up in either hype or fear.

Artificial Intelligence by Melanie Mitchell

Melanie Mitchell contends in Artificial Intelligence: A Guide for Thinking Humans that AI systems are far more limited than their impressive performances suggest—and our tendency to overestimate AI’s capabilities while underestimating the complexity of human intelligence creates dangerous blind spots. Mitchell, a computer scientist at Portland State University and the Santa Fe Institute who has spent decades studying AI, systematically examines what current AI can and cannot do, revealing a gap between public perception and reality. The book offers a clear-eyed assessment of AI’s “barrier of meaning,” the problem that even the most sophisticated neural networks process information without understanding it the way humans do.

Mitchell shows how systems that excel at narrow tasks like playing Go or recognizing faces fail catastrophically when faced with slightly altered conditions, because they lack the common-sense knowledge that humans take for granted. This isn’t a temporary limitation but a fundamental issue: Current AI approaches mistake statistical pattern matching for genuine comprehension, leaving them vulnerable to errors that reveal their shallow understanding of the world. Mitchell helps readers evaluate AI developments skeptically rather than being swept up in either utopian promises or dystopian fears.

The Alignment Problem by Brian Christian

In The Alignment Problem, computer scientist and technology writer Brian Christian tackles one of the most pressing questions of our time: How do we ensure that increasingly powerful AI systems do what we actually want them to do? The “alignment problem” refers to the dangerous gap between what we program machines to optimize for and what we really desire them to do. Christian traces how this challenge spans from today’s biased hiring algorithms to tomorrow’s potential superintelligence, weaving together stories from the researchers working to solve these problems.

Through accessible explanations of AI techniques like reinforcement learning, inverse reinforcement learning, and interpretability research, the book shows both the remarkable progress being made and the fundamental difficulties that remain. Rather than offering easy answers, Christian reveals why building AI that truly serves human values requires grappling with deep questions about what we want, how we learn, and what it means to be human.

Prediction Machines by Ajay Agrawal, Joshua Gans, and Avi Goldfarb

In Prediction Machines, three economists cut through the AI mystique with a deceptively simple insight: Artificial intelligence is fundamentally about making prediction cheaper. Rather than viewing AI as magical thinking machines, Agrawal, Gans, and Goldfarb argue that we should understand it as a technology that dramatically reduces the cost of generating predictions from existing data—whether that’s predicting what a human driver would do in traffic, what words should come next in a sentence, or which products a customer might want to buy.

This economic reframing has profound implications. When any input becomes drastically cheaper, we use more of it and find new applications for it. Just as cheap arithmetic transformed photography from a chemistry problem to a digital one, cheap prediction is transforming problems we never thought of as prediction problems—like driving—into prediction challenges. The authors also identify what becomes more valuable when prediction is cheap: Human judgment, data quality, and decision-making skills all increase in importance as complements to machine prediction. By focusing on economic fundamentals, this book provides a practical framework for understanding what skills will matter most in an AI-driven economy.

Generative Deep Learning by David Foster

In Generative Deep Learning, data scientist David Foster offers a technical guide to how machines create rather than simply recognize content, bridging the gap between cutting-edge research and practical implementation. The book’s premise is that generative modeling represents a fundamental shift from traditional machine learning approaches. While most AI systems excel at classification—determining whether an image contains a dog or cat—generative models work in reverse, creating entirely new images, text, music, or other content from scratch. Foster argues that understanding these creative AI systems requires grasping how they model probability distributions and navigate high-dimensional spaces to find realistic outputs.

If you want to take a hands-on approach to understanding generative AI, this book is for you: Foster walks readers through building everything from variational autoencoders to advanced GANs without requiring expensive hardware, covering mathematical foundations such as latent spaces, attention mechanisms, and diffusion processes. Rather than treating the technical mechanics and the creative possibilities of generative AI as separate domains, he shows how understanding their architecture—knowing how models encode information, navigate latent spaces, and decode outputs—directly informs your understanding of their creative potential.
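The encode-to-latent-space-and-decode loop that Foster builds on can be sketched in a few lines. The toy below is not from the book: it uses the fact that a purely linear autoencoder reduces to PCA, so NumPy's SVD stands in for a trained encoder/decoder pair. The data, dimensions, and function names are illustrative assumptions.

```python
import numpy as np

# Toy data: 200 points in 5-D that secretly lie near a 2-D plane.
rng = np.random.default_rng(0)
latent_true = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
data = latent_true @ mixing + 0.01 * rng.normal(size=(200, 5))

# A linear autoencoder is equivalent to PCA, so SVD gives us the
# "encoder" and "decoder" directly instead of gradient training.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
components = vt[:2]  # top-2 principal directions

def encode(x):
    """Map 5-D inputs to 2-D latent codes."""
    return (x - mean) @ components.T

def decode(z):
    """Map latent codes back to 5-D space."""
    return z @ components + mean

z = encode(data)
reconstruction = decode(z)
error = np.abs(reconstruction - data).max()
print(z.shape, error < 0.1)  # → (200, 2) True
```

Real generative models replace the linear maps with deep networks and add a probabilistic twist (a VAE samples from the latent distribution rather than storing a single code), but the encode/decode geometry is the same idea.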

Co-Intelligence by Ethan Mollick

In Co-Intelligence, Wharton professor Ethan Mollick contends that rather than viewing AI as either humanity’s savior or destroyer, we should learn to work alongside these powerful but imperfect systems. His central insight is that the most effective way to use AI is to treat it like a person—but one with very specific capabilities and blind spots that you need to learn through experimentation. Mollick, who leads Wharton’s Generative AI Labs, argues that we’re facing what he calls a “jagged frontier” of AI capabilities. These systems can write sophisticated poetry but struggle to count words accurately, which makes them fundamentally different from traditional software.

His book centers on four rules: Always invite AI to collaborate, maintain human oversight, assign AI clear personas to guide its responses, and remember that today’s AI is the worst you’ll ever use as the technology rapidly improves. Mollick emphasizes hands-on learning: He recommends putting in at least 10 hours of experimentation to understand AI’s strengths and limitations in your specific context. He demonstrates how these tools can serve as virtual cofounders for entrepreneurs, tutors for students, and creative collaborators for anyone willing to learn their quirks.

The AI Con by Emily M. Bender and Alex Hanna

In The AI Con, linguist Emily M. Bender and sociologist Alex Hanna argue that what we call “artificial intelligence” is a marketing con designed to extract wealth from human creativity and labor. Drawing on their popular podcast Mystery AI Hype Theater 3000, they argue that large language models like ChatGPT work not through understanding but through sophisticated pattern matching—what they call “synthetic text-extruding machines” that produce papier-mâché from vast databases of human writing.
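Bender and Hanna’s point that generation is statistical continuation rather than comprehension can be illustrated at miniature scale. The sketch below is my own toy, not theirs: a bigram model that “extrudes” text purely by sampling which word historically followed which, using only the standard library. Real LLMs are vastly more sophisticated, but the principle—next-token statistics, no meaning—is the one the authors describe.

```python
import random
from collections import defaultdict

# A tiny corpus standing in for "vast databases of human writing".
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which word follows which: pure statistics, no understanding.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def extrude(start, n_words, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(n_words):
        words.append(random.choice(following[words[-1]]))
    return " ".join(words)

print(extrude("the", 8))  # fluent-looking but meaning-free word salad
```

Every output sentence is locally plausible because each word pair occurred in the corpus, yet the model has no idea what a cat or a rug is—which is exactly the gap between fluency and understanding the authors emphasize.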

The authors contend that AI hype serves as cover for familiar corporate strategies: automating jobs to reduce labor costs, replacing quality services with cheaper alternatives, and using the promise of technological salvation to justify harmful policies. They also note that the same systems that supposedly herald a new era of machine intelligence rely heavily on the exploitation of human labor, from content moderators sorting through traumatic material to data workers whose contributions remain invisible behind claims of “full automation.” They argue that rather than delivering promised productivity gains, these technologies create more work while degrading the quality of human services and relationships.

AI 2041 by Kai-Fu Lee and Chen Qiufan

AI 2041 pioneers what its authors call “scientific fiction”—stories grounded in technologies likely to exist within 20 years rather than impossible sci-fi concepts. Kai-Fu Lee, former president of Google China and current CEO of Sinovation Ventures, partnered with Chinese science fiction writer Chen Qiufan to create a unique format: Each of 10 stories explores a different AI application, followed by Lee’s analysis of the real technology behind it. The book tackles scenarios ranging from AI-powered insurance systems that monitor every aspect of daily life to deepfake technologies used in political manipulation, autonomous vehicles navigating mixed traffic, and quantum computing threatening global financial systems.

This collection is committed to plausibility—Lee estimates an 80 percent likelihood that these technologies will emerge by 2041. However, while Lee presents AI as inherently neutral technology that only becomes problematic through human misuse, critics argue this framing obscures how bias and corporate interests are embedded in AI development itself. The collaboration between a venture capitalist with significant AI investments and a science fiction writer raises questions about whether this represents genuine speculation—or sophisticated marketing for emerging technologies.
