PDF Summary: Seeking Wisdom, by Peter Bevelin
Below is a preview of the Shortform book summary of Seeking Wisdom by Peter Bevelin. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of Seeking Wisdom
Have you ever reflected on one of your past mistakes and wondered, “What was I thinking?” In Seeking Wisdom, Peter Bevelin claims that we make significant errors because we tend to think irrationally. He argues that, fortunately, we can avoid major errors by following the wisdom of some of the world’s most rational thinkers. In this guide, we’ll explore Bevelin’s research on mistakes and present his advice on rational thinking. You’ll learn how to…
- Make wiser decisions by questioning your first impressions
- Prevent your emotions from interfering with your decisions
- Make moral choices—even if those around you don’t
- Base your decisions on the right evidence
- Better predict your choices’ outcomes
Throughout our guide, we’ll compare Bevelin’s ideas on rational thinking to those of other experts, such as Donella Meadows and James Clear. Furthermore, we’ll supplement Bevelin’s strategies with additional actionable steps, such as how to calm yourself down before making a big decision.
(continued)...
(Shortform note: In Brain Rules, biologist John Medina emphasizes the role that stress played in our ancestors’ responses to danger, and he clarifies why that stress reaction no longer serves us today. Medina claims that we’ve evolved to effectively manage acute stress (short-term stress that helps us respond to urgent threats). This is the type of stress our ancestors dealt with using quick emotional reactions. However, we haven’t evolved to handle chronic stress (long-term stress), a type of stress we experience often nowadays. Medina explains that chronic stress floods our brains with cortisol, a hormone that, at high levels, can cause memory loss and impair learning—two effects that may compromise our ability to make thoughtful decisions.)
Although we’re conditioned to engage in emotional decision-making, we can use rational thinking to prevent our emotions from compromising all of our decisions. Let’s explore two ways to do so.
A Solution: Keep Calm and Consult Pre-Established Guidelines
First, Bevelin claims that you should avoid making decisions under the influence of strong emotions. We make the best decisions when we take the time to rationally weigh our options. Any time you’re experiencing a strong emotion, hold off on decision-making. Focus on calming down first.
Strategies for Calming Down
What are some ways you can calm down before returning to an important decision? Experts claim that you can calm yourself down through actions that trigger your parasympathetic nervous system. This part of your nervous system controls your ability to relax. Here are several research-based strategies that calm you down by activating this system:
Immerse yourself in cold water. Take a cold shower or bath, or splash your face with icy water. Research reveals that immersing yourself in cold liquid can trigger a release of dopamine, one of the “feel-good hormones.” Studies also show that immersing your face in cold water slows down your heart rate and stimulates your parasympathetic nervous system, two physical reactions associated with a feeling of calmness.
Engage in deep breathing. One strategy is to breathe in for four seconds, then breathe out slowly for eight seconds. Not only do deep breathing techniques induce a state of relaxation, but they also help you think more clearly.
Relax your muscles. Tense muscles signal to your body that it’s stressed, which keeps it in a state of physical and emotional stress. You can disrupt this cycle by engaging in progressive muscle relaxation, a calming technique in which you tense and then relax your muscle groups one by one. Start with tensing and relaxing your hands by making and releasing a fist. Then, continue this technique as you work your way through the muscles of your arms, chest, legs, and feet.
Once you’re calm, Bevelin argues that you should consult pre-established guidelines: a list of rules or steps that instruct you on how to act. Pre-established guidelines counteract the influence of your emotional state by reminding you what actions to take or avoid. Make these guidelines in advance so that you can reference them any time you’re faced with a decision. Bevelin bases his ideas on pre-established guidelines on the wisdom of Charles Munger and Warren Buffett, both of whom use this rational thinking strategy.
For example, imagine you’re someone who often splurges on expensive purchases and regrets it later. Here’s a set of guidelines you could consult before making any spending decision:
- Don’t buy extra items just to reach the free shipping threshold for online purchases.
- Don’t buy a duplicate of an item you already own if the one you own serves its purpose.
- When you have several options for an important item to buy, opt for the second-least-expensive option.
Reinforce Your Pre-Established Guidelines by Resisting Your Temptations
Bevelin provides a detailed explanation of how and why to create pre-established guidelines, but he doesn’t offer guidance on how to ensure you actually follow these rules. When it comes time to make a decision, you may want to ignore your guidelines and give in to your temptations. To prevent this, follow these experts’ advice on resisting your temptations.
In Atomic Habits, James Clear claims that you’re more likely to resist temptations if you eliminate reminders of those temptations. For example, to help reinforce your spending guidelines, you could unsubscribe from your favorite stores’ marketing lists. That way, you won’t be reminded of what products you’re missing out on.
Furthermore, consider making your temptations inconvenient so you’re less likely to engage in them. For instance, delete your credit card information from your online shopping accounts. The inconvenience of re-entering this information may dissuade you from purchasing something you don’t need.
Irrational Error 3: Failing to Make the Moral Choice
Finally, Bevelin argues that we often make immoral choices because we irrationally base our decisions on what other people are doing rather than on what we think is right. One reason behind this failure is that we’re afraid having an unpopular opinion will invite criticism. For example, imagine one of your colleagues makes a sexist comment during a meeting. You avoid speaking up because you don’t want others to criticize you for being overly sensitive or disloyal.
(Shortform note: There’s a term for our tendency to conform to others’ expectations of us and make immoral choices: the banality of evil. It’s the idea that ordinary people can do extremely immoral things when there’s pressure to conform or to follow the orders of authority figures. Philosopher Hannah Arendt proposed this idea to explain why ordinary Germans became complicit in the atrocities of the Holocaust.)
The Evolutionary Origins of Our Failure to Make Moral Choices
Bevelin argues that throughout our evolutionary history, our habit of following others’ behavior helped ensure our survival. Cooperating with other people increased our chances of avoiding pain and seeking pleasure. Our hunter-gatherer ancestors hunted in groups, collaborated on building shelters, and treated each other’s wounds. These ancestors learned to strive for social acceptance since inclusion in the group had many benefits. They learned to fear social exclusion, as it would reduce their access to the benefits of group cooperation.
(Shortform note: While Bevelin emphasizes how following and cooperating with others increased our chances of survival, the opposite behavior—distrusting others—also has evolutionary value. In Talking to Strangers, Malcolm Gladwell notes that distrusting others’ intentions helps us notice when others are trying to take advantage of us. However, he adds that we nonetheless evolved the tendency to trust others by default because mutual trust and transparent communication aided our survival significantly more than constant skepticism did.)
A Solution: Recognize the Benefits of Making the Moral Choice
Although our evolutionary past hard-wired us to prioritize social inclusion over independent moral thinking, we can counteract this tendency with rational thinking. Bevelin offers a strategy from Warren Buffett that can motivate you to make the moral choice—even if it’s the unpopular choice. When you’re faced with a moral choice, ask yourself if you’d be able to live with a critical journalist writing a front-page story about your decision. For instance, consider if you’d be able to live with everyone you know reading this headline: “Worker Fails to Call Out Colleague for Sexist Comment, Claiming She ‘Hoped Someone Else Would Do It.’”
Make the Moral Choice by Leveraging Your Desire for Social Approval
Bevelin doesn’t explain why Buffet’s front page test works, but we can infer that this strategy leverages our ingrained desire for social approval, using it to compel us to do what’s right. This strategy expands our decision’s audience to anyone we imagine reading our front-page story. We may then fear their disapproval of our immoral actions and instead do something moral that they’d approve of.
Other experts also offer strategies that, like Buffett’s, utilize our natural desire for others’ approval to motivate us to make the moral choice. For instance, some experts recognize that we’re susceptible to following the commands of authority figures, and these figures can sometimes compel us to make immoral choices. You can overcome this tendency by changing your allegiances: Create distance between yourself and the figure of authority, and align yourself instead with victims. That way, you’re more likely to act morally to gain victims’ approval, rather than act immorally to gain the authority figure’s approval.
Origin 2: Failure to Understand and Apply Scientific Concepts
According to Bevelin, the second reason why we make irrational errors is that we fail to understand scientific concepts and apply those concepts to our decisions. In this section, we’ll share wisdom from math and physics that can help you think more rationally. We’ve organized these ideas into two themes: concepts that relate to interpreting evidence, and concepts that relate to systems thinking.
(Shortform note: Although Bevelin describes our failure to understand science as an origin of our errors that’s separate from our evolutionary mismatch, these two origins may be linked. In The Art of Thinking Clearly, Rolf Dobelli argues that our hunter-gatherer minds didn’t evolve to do complex math (which also serves as the foundation for many scientific concepts). He explains that our ancestors mostly did linear math, such as simple addition and multiplication. By contrast, modern life requires us to engage in more complex, exponential math, such as understanding interest rates and making sense of statistics like “crime in the city is rising at a rate of 5% per year.”)
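To make Dobelli’s point concrete, here’s a minimal Python sketch (our illustration, not from either book) contrasting a linear reading of a “5% per year” crime statistic with the compounding it actually implies. The starting figure of 1,000 incidents is hypothetical:

```python
# Hypothetical starting figure; the 5%-per-year rate comes from Dobelli's example.
base = 1000

for year in (1, 5, 10):
    linear = base + base * 0.05 * year  # linear intuition: add 5% of the base each year
    compounded = base * 1.05 ** year    # what a 5%-per-year growth rate actually means
    print(f"year {year:2d}: linear {linear:,.0f} vs. compounded {compounded:,.0f}")

# year 10: linear gives 1,500; compounding gives ~1,629. The gap widens
# every year, which is exactly what linear intuitions tend to miss.
```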
Concepts Related to Interpreting Evidence
Bevelin argues that we often make irrational errors because we base our choices on the wrong evidence. In this section, we’ll share two concepts that can help us rationally select evidence, interpret it, and use it to inform our decisions.
Concept 1: Correlation Vs. Causation
First, Bevelin claims that we sometimes make unwise decisions because we assume that evidence shows causation when it really shows a correlation. When evidence reveals causation, it demonstrates that one variable causes another. When you base your decisions on causal evidence, you can feel confident that acting on it will likely produce the outcome the evidence predicts. By contrast, a correlation is evidence that two variables are related. However, just because two variables are related doesn’t mean that one causes the other. When you base your decisions on correlative data, you may make the wrong choice or waste your efforts.
For example, imagine you have a business that sells hand-knit sweaters. During your company’s first summer, you notice your sales are low. In response, you decide to invest extra time and money in marketing. When your sales surge from October to January, you attribute this increase to your recent marketing efforts. You then make this plan: Any time you notice a dip in sales, you’ll increase your advertising.
However, this plan may be a waste of your time and energy: It’s possible that your advertising efforts and your increase in sales may only be correlated. Perhaps your surge in sales from October to January was due to the fact that people typically buy more sweaters during colder months, not due to your advertising efforts.
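To see how this trap arises, here’s a minimal Python sketch (hypothetical numbers, not Bevelin’s) in which cold weather drives both ad spend and sweater sales. The two correlate strongly even though advertising has zero causal effect in this toy model:

```python
import random

random.seed(0)

# Five cold months (Jan, Feb, Oct, Nov, Dec) drive both variables.
cold = [1 if m in (0, 1, 9, 10, 11) else 0 for m in range(12)]

ad_spend = [500 + 300 * c + random.gauss(0, 50) for c in cold]  # you advertise more in winter
sales = [200 + 400 * c + random.gauss(0, 30) for c in cold]     # but only weather drives sales

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly positive, even though ads have zero causal effect in this model.
print(f"ad spend vs. sales: r = {pearson_r(ad_spend, sales):.2f}")
```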
How to Distinguish Causal Data from Correlative Data
While Bevelin suggests that you should base your decisions on data that reveal causality, he doesn’t provide specific guidance on how to distinguish causal data from correlative data. In Cribsheet, economist Emily Oster provides additional guidance on how to improve your decisions by consulting research that reveals causation. (While Oster writes about improving parenting decisions specifically, her guidance is arguably applicable to any decision.)
According to Oster, you should base your decisions on results from randomized trials instead of results from observational studies. Let’s contrast these two types of studies and further explore Oster’s claim.
What’s a randomized trial? This is a type of experiment that reveals causation. In randomized trials, researchers randomly sort test subjects into groups, then they select one group to experience the variable under study. This process ensures that there are no other differences between the groups other than that variable. Therefore, these types of studies provide strong evidence that the variable studied causes the outcome that researchers observe.
What’s an observational study? By contrast, an observational study tends to reveal correlations, not causation. This type of study compares groups of people without experimenting on them. Because these studies don’t involve an experiment, researchers can’t conclude whether the outcomes result from the participants’ actions or from other differences in their lives. These types of studies don’t provide strong evidence that the variable studied causes the outcome that researchers observed.
How can you tell if a study is a trustworthy randomized trial? According to experts, you should look for language (usually near the study’s introduction) stating that researchers randomly assigned participants to either an experimental group or a control group.
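As a minimal sketch of why randomization works (our toy model, not Oster’s), the snippet below randomly assigns 100 hypothetical subjects to treatment and control groups. Because assignment is random, baseline differences wash out, and the gap between group means reflects the treatment effect:

```python
import random

random.seed(1)

# Each of 100 hypothetical subjects has an idiosyncratic baseline outcome.
baselines = [random.gauss(10, 2) for _ in range(100)]
TREATMENT_EFFECT = 3  # the causal effect we build into this toy model

# Random assignment: shuffle subject indices, then split them in half.
indices = list(range(100))
random.shuffle(indices)
treated, control = indices[:50], indices[50:]

t_mean = sum(baselines[i] + TREATMENT_EFFECT for i in treated) / len(treated)
c_mean = sum(baselines[i] for i in control) / len(control)

# The gap between group means is close to 3, the true treatment effect,
# because randomization balanced the baselines across groups.
print(f"treatment mean: {t_mean:.1f}, control mean: {c_mean:.1f}")
```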
A Solution: Base Your Decisions on Evidence That Shows Causation
To base your decisions on evidence that reveals causation instead of correlations, Bevelin recommends that you look for strong evidence that one variable causes another. For example, you could consult with other knitting entrepreneurs and compare data on when your sales rose and fell. Imagine that their sales also rose in the autumn, whether or not they ramped up their advertising efforts during that period. This comparison would reveal that there’s more evidence that the weather—not your marketing efforts—caused your autumn sales to increase.
Generate Your Own Causal Data Using a Trial Run
Sometimes, it’s challenging to find strong evidence that one variable causes another. For instance, you may feel that you don’t have enough information to confidently conclude that the weather caused your sales to increase. Furthermore, it may be difficult to find research that relates to your situation. This lack of strong data may leave you feeling unsure of how to proceed with future decisions.
In these cases, consider running your own experiment in the form of a trial run: a brief test in which you examine the effect of a single variable. As Dan and Chip Heath explain in Decisive, conducting a trial run of an option you’re considering for a decision provides evidence of an option’s outcomes. This can help you make an informed, rational prediction about how your decision may pan out long term.
For example, one year you could run an advertising campaign in the fall, and the next year you could refrain from doing so. By comparing the two years’ sales, you’d gain some insight into whether the advertising campaign affected sales, and therefore whether you should invest in advertising during future fall seasons.
Concept 2: Case Evidence Vs. Representative Evidence
Second, Bevelin argues that we sometimes make poor decisions because we base our choices on case evidence. Case evidence is usually a single data point or anecdote that doesn’t necessarily represent the actual probability of an outcome. This type of evidence can be misleading because it gives you the impression that an outcome is more or less likely than it actually is. For example, imagine that you’re considering whether to invest in bitcoin. You hear from a colleague that her bitcoin investment is making significant returns. Her situation is an example of case evidence, and it may not reflect the actual probability of profiting from a bitcoin investment.
A Solution: Base Your Decisions on Representative Evidence
To avoid basing your decisions on case evidence, Bevelin claims that you should base your decisions on representative evidence: evidence that reflects a large sample of cases. This evidence is more likely to reflect the actual probability of an outcome. For example, you could search for data revealing whether or not new bitcoin investors are making money. A recent study reveals that three in four new bitcoin investors have lost money on their initial investments. Because this research reflects the experiences of a large sample of bitcoin investors, it’s more reliable than a single colleague’s testimony.
(Shortform note: Marketers sometimes leverage our tendency to base our choices on case evidence, as opposed to stronger representative evidence that may work against these marketers’ interests. Experts on marketing claim that you can increase your sales if you share testimonials from current customers who enjoyed your product or service. These testimonials draw in new customers who feel compelled by others’ success stories, even if those testimonials aren’t representative of the average customer experience.)
Concepts From Systems Thinking
According to Bevelin, another factor that impairs our decision-making is our failure to engage in systems thinking. This causes us to make decisions that produce unintended consequences. He explains that every action exists within a larger system, so you should always consider the far-reaching consequences of your actions. He defines a system as a complex web of interdependent components, such as a business that has multiple employees, processes, and clients. Systems thinking is a type of analysis in which you consider how interdependent parts affect the whole.
Smaller Systems Also Matter
Most of the examples of systems that Bevelin provides in Seeking Wisdom are large, complex systems such as businesses, factories, organizations, and economies. However, smaller systems also have a significant impact on our lives, and Bevelin’s ideas on actions and consequences apply to these systems as well. Here are several examples of smaller systems and how Bevelin’s ideas also apply to these systems:
Your body is a system. It comprises interdependent parts—organs, fluids, and hormones—that work together to serve the whole. When you modify your body in some way, you should be aware of how that change may affect another part of your body. For instance, medications that treat one problem may cause another, such as insomnia.
Your family is a system. Each of your family members is part of a complex web of relationships. For instance, if you speak harshly to one of your family members, that might produce a negative ripple effect: It may send the message to your other family members that it’s fine to be harsh, which may lead them to be unkind toward other family members.
Your everyday habits make up a system. In Atomic Habits, James Clear uses the word system to describe a series of behaviors that together lead to an outcome. For instance, imagine you have the goal to lose weight. You may create a system of behaviors that together achieve this goal: Eat healthier, get more sleep, and exercise more frequently. The positive feelings you get from exercising may motivate you to practice the other healthy habits.
Let’s explore two concepts from systems thinking that can help us avoid unintended consequences.
Concept 1: Limiting Factors Affect the Larger System
First, Bevelin emphasizes that most systems have a limiting factor that affects the performance of the entire system. A limiting factor is an element of a system that many other aspects of the system depend on. If the limiting factor fails or is slow, the whole system will also fail or operate slowly. When you’re trying to improve or grow a system, identify its limiting factor and improve it, thereby improving the system as a whole. When we fail to understand the limiting factors of the systems in our lives, our efforts to improve those systems are doomed to fail.
Let’s explore Bevelin’s concept of limiting factors by thinking of a common system: your home and family. Imagine you recently had a child and you hope to continue growing your family. The city you live in is a limiting factor in your home system because if you have to keep paying such high rent, you may never be able to afford a larger home to accommodate a larger family. To change this limiting factor, move to a new city with lower rent so you can afford a larger living space.
What Errors Do We Make When We Fail to Consider Limiting Factors?
While Bevelin claims that we can avoid making errors by identifying and improving a system’s limiting factors, he doesn’t explore in depth what types of errors we make when we fail to consider limiting factors. In Thinking in Systems, Donella Meadows identifies two major errors we make when we fail to identify and improve limiting factors.
First, according to Meadows, we sometimes misidentify the limiting factor, which prevents us from supporting the system’s growth or improvement. For instance, at first, you may misidentify the small size of your apartment as the factor limiting your family’s growth. However, moving to a larger apartment will only increase your rent, which may make it impossible to afford a second child. The real limiting factor is the city’s sky-high rent.
Second, Meadows claims that limiting factors change over time, especially as a system grows. Your system may fail to reach your goals for it if you think the limiting factor remains the same. For instance, imagine you move your family to a new city with lower rent so you can afford a larger home. Now, there may be new factors that limit the potential growth of your family, such as the cost of education in your new neighborhood.
Concept 2: Actions Produce Far-Reaching, Unintended Effects
Furthermore, Bevelin argues that because you exist in complex systems, your actions can produce far-reaching consequences, including unintended consequences. If you fail to predict your actions’ consequences, one of your decisions may cause a ripple effect that produces negative outcomes elsewhere in the system.
(Shortform note: A term people sometimes use to describe the unpredictability of consequences is “the butterfly effect.” The term was coined by meteorologist Edward Lorenz, whose weather predictions led him to conclude that a small change in a system could produce unexpected, large outcomes elsewhere in the system. He reasoned that the flap of an additional butterfly’s wings in one region of the world could contribute enough wind current to create a tornado elsewhere.)
Let’s illustrate Bevelin’s idea of unintended, far-reaching consequences using the example of the family as a system. Imagine you accept a promotion at your company so you can earn a higher income and better provide for your family. You become so busy in your new role that you spend less time at home. Your partner is left with more parenting responsibilities, and this imbalance creates tension in your relationship. Your children are upset by this tension, and they begin to act out at school.
To increase the likelihood that your actions within a system will lead to your intended outcomes, Bevelin claims that you should try to predict the far-reaching consequences of your actions. For example, before taking the promotion you’re offered, discuss with your partner how it may affect them. Then, you two could brainstorm ways to reduce the burden of parenting on your partner. For instance, you could ask other family members for support watching the kids or enroll your children in a fun after-school program.
(Shortform note: In Thinking in Systems, Donella Meadows offers a visual technique for better understanding systems, which could help improve your ability to predict your actions’ far-reaching consequences. She recommends that you draw a diagram of your system by mapping out each of its parts and how they’re connected. Doing so forces you to notice how your systems’ elements interrelate, helping you notice how one change might trigger another change elsewhere in the system. For example, if you’re a manager who’s restructuring your department, create an organizational chart that visualizes the hierarchies within your department and the relationships among your employees.)
Bevelin offers the caveat that it’s impossible to closely examine the entire system surrounding your actions, since systems are extremely complex. Rather than spending your time trying to predict every effect your actions will have (which is impossible), expect that your actions will have unintended, negative consequences, and plan for those outcomes.
Bevelin shares a strategy from Warren Buffett on how to plan for unexpected, negative outcomes: Build safety factors into your predictions. A safety factor is a buffer that you add to your prediction; if your prediction turns out to be too optimistic, the buffer helps you avoid disaster. For example, Buffett’s company won’t buy a stock when it estimates that the stock’s value is only slightly more than its price. Instead, it builds a safety factor into purchasing decisions by buying a stock only if its estimated value is significantly higher than its price.
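Here’s a minimal sketch of this safety-factor rule in Python (the 30% margin and the prices are our hypothetical choices; Bevelin reports the principle, not specific numbers):

```python
def should_buy(estimated_value: float, price: float, safety_margin: float = 0.30) -> bool:
    """Buy only if the estimated value exceeds the price by the full margin."""
    return estimated_value >= price * (1 + safety_margin)

print(should_buy(estimated_value=105, price=100))  # False: value only slightly above price
print(should_buy(estimated_value=150, price=100))  # True: buffer absorbs an optimistic estimate
```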
An Additional Way to Plan for Unintended, Negative Consequences
Donella Meadows’s ideas on resilience in Thinking in Systems offer additional guidance on how to guard your system against unintended, negative consequences. While Buffett’s safety factor strategy may work well for predictions that involve calculations that can easily be adjusted to provide a buffer, Meadows’s ideas may work well for situations that don’t involve numeric predictions, such as making a plan to improve your mental health.
According to Meadows, the best way to guard your system against unexpected outcomes is to make it resilient in advance. She defines a resilient system as one that can perform well in a wide range of situations, both positive and negative. Additionally, resilient systems have built-in backup mechanisms that can serve as a safety net.
For example, you could make your mental health resilient by investing in multiple levels of support. You could engage in therapy, go on medication, and identify people in your life who can serve as your support system. That way, if one level of support unexpectedly failed (for instance, if you forgot to refill your medication), you’d have other backup systems that could prevent you from experiencing a crisis.