PDF Summary: Thinking in Systems, by Donella H. Meadows
Book Summary: Learn the key points in minutes.
Below is a preview of the Shortform book summary of Thinking in Systems by Donella H. Meadows. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of Thinking in Systems
Thinking in Systems is an introduction to systems analysis. Many aspects of the world behave as complex systems rather than as simple cause-effect relationships. Understanding how systems work is key to understanding why they tend to produce problems that are stubbornly resistant to improvement.
This book teaches you how to start viewing the world in terms of systems, why we tend to misunderstand complex systems, and how to intervene most effectively in systems. Learn the systems way to view problems as diverse as the war on drugs, harvesting renewable resources, and business monopolies.
(continued)...
- If the birth and death rates are equal, the population will stay the same.
Different circumstances can drive the relative strength of the birth or death loop (a short simulation sketch follows this list):
- Data suggests that as countries get wealthier, birth rates fall. Therefore, poorer countries with high current birth rates may not retain high birth rates as their economies develop.
- A lethal, contagious disease could drastically increase the death rate. For instance, during the HIV/AIDS epidemic, projections of populations in areas with high HIV prevalence had to account for higher mortality.
- Birth rate could also fall due to social factors, such as lower interest in raising children or fertility issues.
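To make the stock-and-flow idea concrete, here is a minimal simulation sketch. It is my illustration, not the book's: the function name and the rate constants are invented for the example. It models a population stock with a reinforcing birth loop and a balancing death loop.

```python
# Minimal stock-and-flow sketch: a population stock with a reinforcing
# birth loop and a balancing death loop. All rates are illustrative.

def simulate_population(initial, birth_rate, death_rate, years):
    """Step a population stock forward one year at a time."""
    population = initial
    history = [population]
    for _ in range(years):
        births = birth_rate * population   # reinforcing (growth) loop
        deaths = death_rate * population   # balancing (decline) loop
        population += births - deaths      # inflow minus outflow
        history.append(population)
    return history

# Equal birth and death rates: the stock holds steady.
print(simulate_population(1000, 0.02, 0.02, 5))
# Births outpace deaths: the reinforcing loop dominates and growth compounds.
print(simulate_population(1000, 0.03, 0.02, 5))
```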
More Complicated Systems
Read the full summary to learn:
- How one stock plus two balancing loops models a thermostat keeping a room's temperature
- How delays introduce oscillations into system behavior, as when a car sales manager tries to keep her inventory consistent
- How to model extraction of a non-renewable resource, such as fossil fuels
- How to model extraction of a renewable resource, such as fish in the sea
Why Systems Perform Well
Systems are capable of accomplishing their purposes remarkably well. They can persist for long periods without any particular oversight, and they can survive changes in their environment. Why is that?
Strong systems have three properties:
- Resilience: the ability to bounce back after being stressed
- Self-organization: the ability to make itself more complex
- Hierarchy: the arrangement of a system into layers of systems and subsystems
Designing systems that neglect these three properties leads to brittleness, causing them to fail when circumstances change.
Resilience
Think of resilience as the range of conditions in which a system can perform normally. The wider the range, the more resilient the system. For example, the human body fights off disease from foreign agents, repairs itself after injury, and survives across a wide range of temperatures and food conditions.
The stability of resilience comes from feedback loops that can exist at different layers of abstraction:
- There are feedback loops at the baseline level that restore a system. To increase resilience, there may be multiple feedback loops that serve redundant purposes and can substitute for one another. They may operate through different mechanisms and different time scales.
- Above the baseline loops, there are feedback loops that restore other feedback loops—consider these meta-feedback loops.
- Even further, there are meta-meta feedback loops that create better meta-loops and feedback loops.
At times, we design systems for goals other than resilience. Commonly, we optimize for productivity or efficiency and eliminate feedback loops that seem unnecessary or costly. This can make the system very brittle—it narrows the range of conditions in which the system can operate normally. Minor perturbations can knock the system out of balance.
Self-Organization
Self-organization means that the system is able to make itself more complex. This is useful because the system can diversify, adapt, and improve itself.
Our world’s biology is a self-organizing system. Billions of years ago, a soup of chemicals in water formed cellular organisms, which then gave rise to multicellular organisms and eventually to thinking, talking humans.
Some organizations quash self-organization, possibly because they optimize toward performance and seek homogeneity, or because they’re afraid of threats to stability. This can explain why some companies reduce their workforces to machines that follow basic instructions and suppress disagreement.
Suppressing self-organization can weaken the resilience of a system and prevent it from adapting to new situations.
Hierarchy
In a hierarchy, subsystems are grouped under a larger system. For example:
- The individual cells in your body are subsystems of the larger system, an organ.
- The organs are in turn subsystems of the larger system of your body.
- You, in turn, are a subsystem of the larger systems of your family, your company, and your community, and so on.
In an efficient hierarchy, the subsystems work more or less independently while serving the needs of the larger system. The larger system’s role is to coordinate the subsystems and help them perform better.
The arrangement of a complex system into a hierarchy improves efficiency. Each subsystem can take care of itself internally, without needing heavy coordination with other subsystems or the larger system.
Problems can arise at both the subsystem and the larger-system level:
- If the subsystem optimizes for itself and neglects the larger system, the whole system can fail. For example, a single cell in a body can turn cancerous, optimizing for its own growth at the expense of the larger human system.
- The larger system’s role is to help the subsystems work better, and to coordinate work between them. If the larger system exerts too much control, it can suppress self-organization and efficiency.
How We Fail in Systems
We try to understand systems to predict their behavior and know how best to change them. However, we’re often surprised when a system behaves differently than we expected.
At the core of this confusion is our limited comprehension. Our brains prefer simplicity and can handle only so much complexity. We also tend to think in simple cause-effect terms and on short timelines, which prevents us from seeing the full ramifications of our interventions.
These limitations prevent us from seeing things as they really are. They prevent us from designing systems that function robustly, and from intervening in systems in productive ways.
Systems with similar structures tend to have similar archetypes of problems. We’ll explore two examples of these; the full summary includes more.
Escalation
Also known as: Keeping up with the Joneses, arms race
Two or more competitors have individual stocks. Each competitor wants the biggest stock of all. If a competitor falls behind, they try hard to catch up and be the new winner.
This is a reinforcing loop—the higher one stock gets, the higher all the other stocks aim to get, and so on. It can continue at great cost to all competitors until one or more parties bows out or collapses.
A historical example is the Cold War, in which the Soviet Union and the United States monitored each other’s arsenals and pushed to amass the larger one, at a cost of trillions of dollars. A more pedestrian example is advertising between competitors, which grows increasingly prevalent and obnoxious in the contest for attention.
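A toy simulation makes the runaway dynamic visible. This is my own sketch with made-up numbers, not a model from the book: each competitor sets its target slightly above the other's current stock, and both stocks climb without bound.

```python
# Toy escalation sketch: two competitors each try to exceed the other's
# stock by a fixed margin, creating a reinforcing loop. Numbers are made up.

def escalate(stock_a, stock_b, margin, rounds):
    """Each round, both sides build toward (rival's stock + margin)."""
    for round_number in range(1, rounds + 1):
        target_a = stock_b + margin
        target_b = stock_a + margin
        stock_a, stock_b = target_a, target_b
        print(f"round {round_number}: A={stock_a}, B={stock_b}")

escalate(stock_a=100, stock_b=100, margin=10, rounds=5)
# Both stocks climb indefinitely even though neither side "wants" growth;
# each is only responding to the other's last move.
```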
Fixing Escalation
The solution is to dampen the feedback loop in which competitors respond to each other’s behavior.
One approach is to negotiate a mutual stop. Even if the parties are unhappy about it or distrust each other’s intentions, a successful agreement can halt the escalation and restore balancing feedback loops that prevent runaway behavior.
If a negotiation isn’t possible, then the solution is to stop playing the escalation game. The other actors are responding to your behavior. If you deliberately keep a lower stock than the other competitors, they will be content and will stop escalating. This does require you to be able to weather the stock advantage they have over you.
Addiction
Also known as: dependence, shifting the burden to the intervenor
An actor in a system has a problem. In isolation, the actor would need to solve the problem herself. However, a well-meaning intervenor gives the actor a helping hand, alleviating the problem with an intervention.
This in itself isn’t bad, but in addiction, the intervenor helps in such a way that it weakens the ability of the actor to solve the problem herself. Maybe the intervention stifles the development of the actor’s abilities, or it solves a surface-level symptom rather than the root problem.
The problem might appear fixed temporarily, but soon enough, the problem appears again, and in an even more serious form, since the actor is now less capable of solving the problem. The intervenor has to step in and help again to a greater degree. Thus the reinforcing feedback loop is set up—more intervention is required, which in turn further weakens the actor’s ability to solve it, which in turn requires more intervention. Over time, the actor becomes totally dependent on—addicted to—the intervention.
An example is elder care in Western societies: families used to take care of their aging parents, until nursing homes and social security came along to relieve the burden. People grew dependent on these resources and less able to care for their parents themselves; they bought smaller homes and lost both the skills and the desire to provide care.
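Here is a rough sketch of the dependence loop in code. It is my illustration, with invented coefficients and function names: each round of outside help erodes the actor's own capability, so the next round needs more help.

```python
# Rough sketch of the addiction archetype: intervention relieves the
# problem now but erodes the actor's own capability, so ever more
# intervention is needed. All coefficients are illustrative.

def dependence_loop(problem, capability, erosion, rounds):
    for round_number in range(1, rounds + 1):
        self_fix = min(problem, capability)         # actor solves what she can
        intervention = problem - self_fix           # intervenor covers the rest
        capability *= 1 - erosion * intervention    # help atrophies skill
        problem *= 1.05                             # root cause keeps growing
        print(f"round {round_number}: capability={capability:.2f}, "
              f"intervention={intervention:.2f}")

dependence_loop(problem=10.0, capability=8.0, erosion=0.02, rounds=5)
# Intervention grows each round while capability shrinks: the reinforcing loop.
```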
Fixing Addiction
When you intervene in a system:
- Try to first diagnose the root cause of the issue. Why is the system unable to take care of itself?
- Then design an intervention that will solve the root cause, and that won’t weaken the system’s ability to take care of itself.
- After you intervene, plan to remove yourself from the system promptly.
More System Problems
Read the full summary to learn more common system problems:
- Policy resistance, where a policy seems to have little effect on the system because the actors resist its influence. Example: The war on drugs.
- The rich get richer, where the winner gets a greater share of limited resources and progressively outcompetes the loser. Example: monopolies in the marketplace.
- Drift to low performance, where a performance standard depends on previous performance instead of an absolute benchmark. This can cause a vicious cycle of ever-worsening standards. Example: a business loses market share, each time believing, “Well, it’s not that much worse than last year.” (A short simulation of this drift follows the list.)
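The drift archetype fits in a few lines of simulation. This sketch is mine, with invented numbers: when the goal is re-anchored to recent, slightly discounted performance, the standard erodes every period.

```python
# Sketch of drift to low performance: the goal is re-anchored to perceived
# past performance, and perception discounts the bad news only slightly.
# All numbers are invented for illustration.

standard = 100.0
for year in range(1, 6):
    performance = 0.95 * standard   # performance falls short of the goal
    perceived = 0.98 * performance  # "not that much worse than last year"
    standard = perceived            # the goal drifts down to match
    print(f"year {year}: performance={performance:.1f}, standard={standard:.1f}")
# The standard ratchets downward year after year, a vicious reinforcing loop.
# An absolute standard (standard = 100.0 every year) would break the drift.
```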
Improving as a Systems Thinker
Learning to think in systems is a lifelong process. The world is so endlessly complex that there is always something new to learn. Once you think you have a good handle on a system, it behaves in ways that surprise you and require you to revise your model.
And even if you understand a system well and believe you know what should be changed, actually implementing the change is a whole other challenge.
Here’s guidance on how to become a better systems thinker:
- To understand a system, first watch how it behaves. Research its history: how did the system get here? Get data: chart important metrics over time and tease out their relationships with each other.
- Expand your boundaries. Think in both short and long timespans—how will the system behave 10 generations from now? Think across disciplines—to understand complex systems, you’ll need to understand fields as wide as psychology, economics, religion, and biology.
- Articulate your model. As you understand a system, put pen to paper and draw a system diagram. Put into place the system elements and show how they interconnect. Drawing your system diagram makes explicit your assumptions about the system and how it works.
- Expose this model to other credible people and invite their feedback. They will question your assumptions and push you to improve your understanding. Admitting your mistakes and redrawing your model trains your mental flexibility.
- Decide where to intervene. Most interventions fixate on tweaking mere numbers in the system structure (such as department budgets and national interest rates). There are much higher-leverage points to intervene, such as weakening the effect of reinforcing feedback loops, improving the system’s capacity for self-organization, or resetting the system’s goals.
- Probe your intervention down to its deepest human layers. When you investigate why interventions don’t work, you may surface deep questions of human existence. You might blame people in the system for being blind to obvious data, believing that if only they saw things as you did, the problem would be fixed instantly. But this raises deeper questions: How does anyone process the data they receive? How do people view the same data through very different cognitive filters?
Want to learn the rest of Thinking in Systems in 21 minutes?
Unlock the full book summary of Thinking in Systems by signing up for Shortform.
Shortform summaries help you learn 10x faster by:
- Being 100% comprehensive: you learn the most important points in the book.
- Cutting out the fluff: you don't spend your time wondering what the author's point is.
- Including interactive exercises: apply the book's ideas to your own life with our educators' guidance.
Here's a preview of the rest of Shortform's Thinking in Systems PDF summary: