PDF Summary: Right Kind of Wrong, by Amy Edmondson

Book Summary: Learn the key points in minutes.

Below is a preview of the Shortform book summary of Right Kind of Wrong by Amy Edmondson. Read the full comprehensive summary at Shortform.

1-Page PDF Summary of Right Kind of Wrong

Many people view failure as shameful or painful and try to avoid it at all costs. However, Amy Edmondson’s Right Kind of Wrong argues that failures have been crucial to scientific discoveries, technological advancements, and individual achievements—because people were willing to take risks and learn from their missteps. With the right mindset and tools, you too can transform defeats and shortcomings into innovations and personal growth.

This guide will start by discussing the importance of creating an environment where it’s safe to fail. We’ll then go over Edmondson’s process for studying and learning from past failures to achieve better outcomes in the future. Our commentary will build upon Edmondson’s principles by comparing them to those of other self-improvement experts. We’ll also make connections to the world of business leadership to explain why these ideas can be effective in organizational settings. Finally, we’ll provide some ideas to help you put Edmondson’s principles into practice.

(continued)...

In Humanocracy, authors Gary Hamel and Michele Zanini say that traditional business practices rely on bureaucracy and micromanagement to keep workers compliant, thereby minimizing failures. However, they argue that this approach stifles employees’ potential and therefore harms the company in the long run. Instead, they urge you to give employees autonomy and accountability, which empowers them to discover their full potential as members of your organization.

In other words, the authors say that you should give workers the freedom to experiment with new ways of doing things, pursue exciting new ideas, and discover what unique innovations they can bring to the organization. However, that freedom comes with a requirement to take responsibility for the outcomes of their work, whether positive or negative. While you can celebrate an employee who has a personal success (or a good failure), workers must also be prepared to acknowledge when one of their ideas goes badly and to accept responsibility for it.

The Importance of Context

Edmondson’s discussion of creating an environment where it’s safe to fail comes with one crucial caveat: Sometimes it’s not safe to fail. Therefore, it’s important to evaluate the context you’re working in. This will help you determine the appropriate level of caution for your situation so that you can avoid genuinely harmful mistakes and encourage productive, educational failures. The author urges you to consider two factors relevant to your situation—the level of unpredictability and the level of risk—and find the appropriate balance between caution and experimentation based on your analysis.

Context Factor #1: The Level of Unpredictability

The first factor to consider is the level of unpredictability in what you’re doing. Edmondson describes three levels, from most to least predictable:

  • Stable contexts are situations with well-established procedures and predictable outcomes, like following a recipe or completing simple, routine tasks.
  • Volatile contexts are familiar situations that nonetheless require adaptation and flexibility, such as teaching a class or practicing medicine.
  • New contexts are situations that don’t have established playbooks or best practices yet. Experimentation is absolutely needed in such contexts, so failure should be an expected part of learning and making progress.

Naturally, the likelihood of failure increases with the level of unpredictability. You’re very unlikely to encounter failure in stable situations, and if you do, it’s almost always the result of human error or negligence. At the opposite extreme, failure is to be expected in brand-new situations, because trial and error is the only way to navigate them.

Volatility and Innovation in Business

When discussing unpredictability, the opposite side of the proverbial coin is innovation: The more unpredictable or unstable a situation is, the more likely it is that you’ll need new ideas and discoveries to navigate it. Researcher and business consultant Jim Collins discusses this idea in Great By Choice, where he argues that companies often succeed or fail depending on how closely they stick to their innovation threshold. This is Collins’s term for the “correct” amount of innovation—enough to stay relevant in a changing market, but not so much that the company takes unnecessary risks.

However, an organization’s innovation threshold changes depending on what it does, echoing Edmondson’s discussion of context:

Stable contexts have low innovation thresholds. This means that innovation in these fields should be minimal and focused on incrementally improving quality or efficiency. This is common in industries where people’s safety could be at stake, such as transportation—you probably wouldn’t want your bus driver trying “innovative” routes and driving techniques while you’re a passenger.

Volatile contexts have moderate innovation thresholds. The most successful organizations in these contexts will balance existing knowledge with calculated risks, such as how the Volkswagen Beetle succeeded by combining high product quality with an unusual, eye-catching design.

New contexts have high innovation thresholds. In these fields, obsolescence is a constant threat, and innovation is imperative for survival. This is common in tech-related industries like software development and biotechnology.

Context Factor #2: The Level of Risk

The second context factor to consider is the level of risk involved in what you’re doing. In other words, what might failure cost in terms of money, reputation, or people’s well-being?

In low-risk contexts—situations where failure might only result in inconvenience or mild embarrassment—Edmondson encourages you to take an experimental approach. Trying new things when there’s not much at stake is an effective way to make new discoveries, learn from mistakes, and practice skills you don’t normally get to use. For instance, this is what many training programs do—let people practice skills and test solutions in a low-stakes environment. Someone learning CPR starts by practicing on a dummy so there’s no risk of injuring someone if they do the technique incorrectly. The trainee is free to experiment with various angles and levels of force until they find the method that works for them.

(Shortform note: When striving for innovation, entrepreneurs and designers will often intentionally create low-risk situations in order to test their ideas without putting too much time or money on the line. They do this by creating “experiments” to determine a product’s desirability (whether people will want it), feasibility (whether the product can actually be made), and viability (whether the product can be made efficiently and affordably). For example, game designers frequently launch crowdfunding campaigns on Kickstarter or similar sites as a low-risk way to test their games’ desirability—if the campaign is successful, they know that the product has enough desirability to be worth further investment.)

Conversely, Edmondson says that you should always approach high-risk contexts with vigilance and caution, no matter how unlikely failure is. In a situation where people’s reputations, livelihoods, or lives might be on the line, the consequences of failure are too severe to risk. For example, commercial airliners have numerous, redundant safety features and advanced autopilot systems that can nearly fly themselves. Even so, regulations require there to be at least two pilots in the cockpit at all times. This might seem overly cautious and inefficient, but if anything does go wrong, hundreds of passengers’ lives could be in danger. Therefore, the risk level is too high to leave anything to chance.

(Shortform note: Edmondson urges you to approach high-risk situations carefully, taking all possible precautions. In Antifragile, risk analyst Nassim Nicholas Taleb goes one step further by saying you should start with the assumption that the worst will eventually happen. This means that in addition to making a failure as unlikely as possible, you also work to minimize the harm in case that failure does happen. Continuing with the airplane example, assuming the worst requires installing safety measures that give passengers the best possible chance to survive a plane crash, such as oxygen masks and floatation devices.)

Study Your Failures to Learn From Them

Now that you’ve created an environment—personal, organizational, or both—where it’s safe to fail, you’re ready to start turning those setbacks into the building blocks of future successes.

Edmondson’s process for learning from failure involves much more than just asking what went wrong and what you can do better next time. She says you must carefully study your failures, gather specific details about them, and develop a deep understanding of what went wrong. Only then will you be ready to decide how to respond to the situation and learn how to avoid similar problems in the future.

In this section we’ll go over the types of failures Edmondson identifies: intelligent failures, simple failures, and complex failures. Next, we’ll share Edmondson’s principle that all failures exist on a spectrum ranging from deserving blame to deserving praise. Finally, we’ll discuss common psychological barriers that prevent people from learning from their mistakes.

Types of Failure

Edmondson divides failures into three categories, each with its own qualities and corrective methods:

Type #1: Intelligent Failures

The first category is what the author calls intelligent failures—those that provide meaningful insights without major losses. They’re common in settings where the entire point is to experiment and learn, such as scientific laboratories and R&D departments. As such, this type of “failure” is more or less expected and usually doesn’t require any special corrective action—the people involved were already prepared to fail, learn from the experience, and try again.

For example, if a scientific experiment doesn’t produce the expected results, it means that the researchers got something wrong, such as a flaw in either their hypothesis or their methodology. Finding and correcting that mistake will either improve their understanding of the research topic (if it was a problem with the hypothesis) or their skills at designing and conducting experiments (if it was a problem with the methodology).

The Right to Fail Is a Privilege

Edmondson writes that intelligent failures are an important (and expected) part of progress and innovation. However, in an article she wrote for Time magazine, she also acknowledges that failure is a privilege not everyone has. Citing the work of several different researchers, Edmondson argues that underprivileged groups like women and minorities face unfair and unreasonable pressure to succeed because they often have to overcome stereotypes about their competence and work ethic to be treated with the same respect their colleagues receive by default.

Furthermore, Edmondson says that a privileged person’s failures will often be viewed as isolated events, and perhaps—as she advocates for in Right Kind of Wrong—valuable learning opportunities. Conversely, when someone from an underprivileged group makes a mistake or encounters a setback, people tend to view it as an indictment of their entire demographic. Therefore, women and minorities feel additional pressure because their personal failures could unfairly reflect on others like them.

Type #2: Simple Failures

The second category is what Edmondson calls simple failures—those with a single point of failure, usually (though not always) due to a preventable mistake or oversight. As such, this type of failure is the most likely to require some sort of punishment or corrective action.

Simple failures are common in predictable contexts with well-established procedures, such as factories and other repetitive jobs, because people are likely to get bored and overconfident in such settings. For example, if a factory worker gets injured because they weren’t following proper safety procedures, it’s clear that the point of failure was the employee’s own negligence. It will then be up to the supervisor or manager to determine whether any further punishment is warranted, such as giving the employee a formal reprimand or firing them.

People Seek Simple Answers Where There May Not Be Any

Edmondson says that simple failures are the only type that’s generally worthy of punishment; later in the guide, we’ll also discuss her assertion that people tend to respond to failures by looking for someone to blame and punish. Taken together, these two points suggest that we tend to treat all failures as simple: We assume there was a single point of failure and that it was somebody’s fault. Before assigning blame, then, ask yourself whether you’re just looking for a simple solution to what’s really a complex problem.

Our tendency to look for simple explanations and simple solutions goes back to our ancient ancestors. As historian Yuval Noah Harari explains in 21 Lessons for the 21st Century, prehistoric humans lived in small, relatively isolated tribes, so we evolved to understand simple, small-scale problems like finding food or handling conflicts between small groups. Even today, we struggle to grasp large, complex issues. As a result, people tend to take mental shortcuts, such as looking for someone to take the blame for a problem that might have had many different contributing factors.

Type #3: Complex Failures

The third category is complex failures: situations where multiple factors interact to create a problem. Edmondson says that complex failures are almost never worthy of blame because they’re nearly impossible to predict. Even very skilled and knowledgeable people can’t foresee which specific factors will interact, or in what way, to trigger the underlying problem. Furthermore, complex failures usually involve at least one uncontrollable factor, such as the weather. However, because of that very complexity and unpredictability, such failures can serve as valuable learning opportunities.

The financial crisis of 2007-2008 is a notable example of a complex failure. Many factors, including deregulation in the US financial sector, low interest rates on mortgages and other loans, and overconfidence among bank executives and government officials, combined to create a “housing bubble” in which home prices rose well beyond their actual value. When the bubble “popped,” demand for homes collapsed and prices fell sharply; banks, lenders, and hedge funds lost hundreds of billions of dollars as their mortgage-related loans and assets plunged in value. The shock to the US economy cost millions of people their jobs and homes.

Attempts to learn from these mistakes and prevent another financial crisis led to the US government passing measures like the 2010 Dodd-Frank Act, which tightened banking regulations and created the Consumer Financial Protection Bureau.

Plan for Complex Failures Using the Barbell Strategy

Nassim Nicholas Taleb also discusses the idea of unpredictable events resulting from unforeseen interactions between numerous factors. He refers to things like the 2007-2008 financial crisis as Black Swan events: rare occurrences with unusually large impacts. While Edmondson’s primary concern is with what we can learn from such events after the fact, Taleb discusses various methods of preparing for them to minimize the harm they cause.

One method Taleb proposes is the Barbell Strategy, which evokes the image of loading a barbell with weights at both ends and avoiding the middle. This is an investment approach that balances investments between ultra-safe assets like treasury bonds or cash, and high-risk, high-reward opportunities like investing in startups. This approach protects you against negative Black Swan events because you’ve made sure that most of your assets are safe. At the same time, you can still benefit from positive Black Swan events because of the resources you’ve put into more volatile opportunities.

If the US financial sector had followed the Barbell Strategy, the 2007-2008 financial crisis might never have happened. With most of the banks’ assets protected in safe investments, the economic shock from the housing market would have been much less severe: People would have lost only a little of their money instead of nearly all of it.

While Taleb developed the Barbell Strategy for investments, the basic principle—protecting what you need with very safe decisions, while gambling what you can afford to lose on volatile opportunities—can be applied much more broadly. For example, people commonly hold down a stable job while working on entrepreneurial side projects.
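To make that asymmetry concrete, here’s a minimal Python sketch using hypothetical numbers (a 90/10 split and invented returns; none of these figures come from Taleb or Edmondson). It shows how a barbell allocation caps the loss from a negative Black Swan at roughly the size of the risky slice while leaving the upside of a positive one open.

```python
# A minimal sketch of the barbell idea, using hypothetical numbers (a 90/10 split
# and made-up returns); an illustration only, not Taleb's or Edmondson's model.

def portfolio_value(safe, risky, safe_return, risky_return):
    """Value of the portfolio after each side earns its return."""
    return safe * (1 + safe_return) + risky * (1 + risky_return)

total = 100_000
safe, risky = 0.90 * total, 0.10 * total  # 90% ultra-safe, 10% high-risk

# Negative Black Swan: the risky side is wiped out; the safe side barely moves.
worst = portfolio_value(safe, risky, safe_return=0.02, risky_return=-1.0)

# Positive Black Swan: the risky side pays off tenfold; the safe side is unchanged.
best = portfolio_value(safe, risky, safe_return=0.02, risky_return=9.0)

print(f"Worst case: ${worst:,.0f} (loss capped at ${total - worst:,.0f})")
print(f"Best case:  ${best:,.0f}")
```

Even in this worst case, the portfolio keeps more than 90% of its value, while the best case nearly doubles it. That bounded downside is what the “safe” end of the barbell buys.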

Consider the Whole Range of Failures

We’ve discussed different types of failures, but it’s likely that you still think of success and failure as a simple binary: You either succeeded or you failed. Edmondson urges you to try thinking of failure as a spectrum instead.

At one end of this range are failures that deserve blame. These tend to be problems resulting from negligence or protocol violations, so they require immediate corrective action—such as punishment or additional training—to ensure they don’t happen again. However, the author argues that very few failures (perhaps one out of a hundred) exist at this end of the spectrum.

At the opposite end are failures that deserve praise. These are situations where people planned as well as they could have, took every reasonable precaution, and still encountered a setback. Edmondson says that failures of this sort are worthy of praise because they teach valuable lessons—there was an issue that you didn’t foresee and couldn’t have prepared for, and now you know about it.

Note that the vast majority of problems and setbacks will fall somewhere between those two extremes. Most of the time, any given failure is due to a combination of human error and unforeseen circumstances. Therefore, that failure is neither completely blameworthy nor completely praiseworthy, and it will be up to you to decide how best to proceed.

The author adds that this principle is crucial because, far too often, people respond to failure by looking for someone to blame and punish. However, since most failures aren’t truly worthy of blame, this approach is generally both unfair and unproductive. Far from correcting problems and preventing future ones, our eagerness to blame creates a culture of fear wherein people hide their failures to avoid punishment. In doing so, those people deprive themselves and others of valuable learning opportunities.

(Shortform note: Some psychologists believe that we embrace binary thinking (or “splitting”) over spectrum-based thinking as an emotional defense mechanism. In simple terms, people find it uncomfortable to have conflicting feelings about a single person, thing, or event. To avoid internal conflict, we split complex ideas into simple, opposing categories. In the context of a failure, we’d rather consider it either blameworthy or praiseworthy, instead of recognizing that it has elements of both. Therefore, embracing Edmondson’s idea of a failure spectrum may be difficult and uncomfortable, but doing so will allow you to better understand why any given failure happens and the best way to move forward from it.)

Help Is More Effective Than Punishment

Research into prisons and prisoner reoffense rates supports Edmondson’s argument that people are too eager to blame and punish, and that such an approach usually isn’t effective. Studies have almost universally found that rehabilitating prisoners leads to much lower recidivism rates than just punishing them. In fact, some studies have found that punishment-focused approaches could be counterproductive, actually increasing people’s chances of reoffending.

This is because many people turn to crime only when they believe they have no other option; it’s the only way for them to survive in the face of poverty and limited opportunities. Therefore, giving them the skills and resources to support themselves in the future is more effective than trying to use punishment as a deterrent—they’d rather risk going back to prison than starve.

Applying this idea more broadly, finding someone to punish is usually not an effective way to correct or prevent failures, as Edmondson says. Instead, it’s almost always better to determine why that failure happened and make sure the people involved have the skills and resources they need to stop it from happening again.
