This article is an excerpt from the Shortform book guide to "The Design of Everyday Things" by Don Norman. Shortform has the world's best summaries and analyses of books you should be reading.
What is human error? Should designers take the idea of human error into account?
The answer to the question “what is human error” is actually complicated. We tend to think of human error as our own mistakes, and blame ourselves. But some designs may actually be set up for us to fail.
Keep reading to find out the answer to the question “what is human error” and more.
What Is Human Error and Why Do We Blame Ourselves?
So what is human error? Faulty conceptual models often lead us to blame ourselves when an object doesn’t meet our expectations. If you push a door and it doesn’t open, you push again, but harder. You assume your action was flawed somehow—not that the door itself was poorly designed.
The tendency to blame ourselves when technology fails us is interesting because it is the exact opposite of our default pattern for assigning blame. Normally, when we perform poorly, we blame our environment (perhaps the sun was in our eyes, or the dog ate our homework). But when we perform well, we attribute it to our innate qualities, not the environment.
When we look at other people, this effect is reversed: we assume their successes are products of their environment, but their failures are due to their personal faults. (Shortform note: In psychology texts, this tendency is referred to as the “fundamental attribution error.” To learn more about attribution error and other cognitive biases, read our summary of Thinking, Fast and Slow.)
As new technologies pop up in every corner of our lives, we’re less and less likely to admit to struggling with them, especially when it appears that “everyone else understands this.” In reality, the opposite is true—when it comes to technology, our struggles are more likely due to design, not our own inadequacy. In other words, most people are probably experiencing the same difficulties, whether or not they speak up about it. This can help us understand the question “what is human error?”
The Value of Failure
So, why are we willing to take the blame for failed interactions with technology, but not for our failures in general? One possibility is learned helplessness: the belief that you are doomed to fail in a given situation because you’ve experienced similar failures in the past. A history of repeated failures with a specific experience makes us assume that success is impossible, so we may as well stop trying. So, an encounter with even one or two overly confusing pieces of technology can make us conclude that we’re just not good with technology in general.
We can reframe these experiences using positive psychology. Positive psychology is a subfield of psychology that focuses on people’s strengths and positive emotions instead of their struggles. In this case, positive psychology requires a perspective shift. Instead of seeing repeated failures as evidence that we’re simply not skilled enough, we can actively choose to see failures as learning experiences. For example, if we struggle with a confusing computer program, we might put in the effort to troubleshoot the problem and ultimately end up with a much more thorough understanding of the program than if we’d succeeded on the first try.
Scientists use this practice every day. When an experiment fails, they troubleshoot, find the problem, and try the experiment again. The failure isn't a bad omen; it provides important information that ultimately leads to results.
Human Errors Are Really System Errors
People make mistakes. This is a universal truth. But what is human error? Is it the simple act of making mistakes? The technology around us often requires us to be perfect—to remember information accurately, never be distracted, and react in the same way every time.
In law, the idea of “human error” is accepted as a valid explanation for tragic outcomes. In reality, these errors are rarely “human,” but instead a fault of the system. If a piece of technology is designed without regard to human behavior and cognition, errors are practically guaranteed. Who is responsible for those errors?
Think back to the Three Mile Island incident from Chapter 1. One person misunderstanding an indicator light caused a massive nuclear incident. But why was the control system of a nuclear reactor set up in a way that made it possible for one small mistake to escalate into tragedy? The system was designed to be perfectly logical, but the design didn’t account for the real humans who would be operating it.
Norman recommends getting rid of the phrase “human error” altogether. Instead, we should think of interactions between person and machine the same way we think of interactions between people. When disagreements pop up, each person can clarify their intentions, propose solutions, and move on. The ideal system allows the user and the object to interact in the same way.
For example, some digital calendars allow you to enter dates with natural language. Instead of requiring dates to be entered in a single format, the user can type “August 3rd,” “8/3,” or “next Tuesday” and the event will be added to the proper date. The machine recognizes that humans sometimes phrase things differently, and is programmed to expect and accommodate that.
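To make the idea concrete, here is a minimal Python sketch of format-tolerant date parsing. This is not how any particular calendar app actually works; the function name, the fixed reference year, and the set of accepted formats are all illustrative assumptions, and relative phrases like "next Tuesday" would require extra logic that is omitted here. The point is the design stance: try each format the human might reasonably use, rather than rejecting everything but one.

```python
from datetime import date, datetime

def parse_flexible_date(text: str, year: int = 2024) -> date:
    """Accept several human spellings of a date and return the same date.

    A sketch of the 'accommodate the human' approach: instead of forcing
    one input format, try each plausible one until a match is found.
    """
    cleaned = text.strip().lower()
    # Strip ordinal suffixes ("3rd" -> "3") so one format covers both styles.
    for suffix in ("st", "nd", "rd", "th"):
        if cleaned.endswith(suffix) and cleaned[:-len(suffix)][-1:].isdigit():
            cleaned = cleaned[:-len(suffix)]
            break
    # Try each known format in turn rather than demanding a single one.
    for fmt in ("%B %d", "%b %d", "%m/%d", "%m-%d"):
        try:
            parsed = datetime.strptime(cleaned, fmt)
            return date(year, parsed.month, parsed.day)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date: {text!r}")

print(parse_flexible_date("August 3rd"))  # 2024-08-03
print(parse_flexible_date("8/3"))         # 2024-08-03
```

Both spellings land on the same date, so the "disagreement" between person and machine never happens: the machine does the clarifying, not the user.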
Now that you know the answer to the question “what is human error,” you can account for it more carefully in your own designs.
———End of Preview———
Like what you just read? Read the rest of the world's best book summary and analysis of Don Norman's "The Design of Everyday Things" at Shortform.
Here's what you'll find in our full The Design of Everyday Things summary:
- How psychology plays a part in the design of objects you encounter daily
- Why pushing a door that was meant to be pulled isn't your fault
- How bad design leads to more human errors