Three Keys to Making Better Decisions in Life

This article is an excerpt from the Shortform book guide to "Algorithms to Live By" by Brian Christian and Tom Griffiths. Shortform has the world's best summaries and analyses of books you should be reading.


Why is it so difficult to make decisions? Do you often regret your decisions, thinking you should have known better?

Decision-making is a cognitively taxing process, especially when the future is uncertain and the stakes are high. But it doesn’t have to be this way. According to Brian Christian and Tom Griffiths, humans already have all the tools to make smart decisions. In their book Algorithms to Live By, they explain how to make better decisions using computer algorithms.

Let’s take a look at four algorithms intended to help you make better decisions.

1. How to Know When to Settle

Christian and Griffiths’s first algorithm is: To choose the best from a series of options, explore without committing for the first 37%, then commit to the next top pick you see. This algorithm is designed to solve something mathematicians call an “optimal stopping problem”—when faced with a series of options, when do you settle down and commit to the opportunity in front of you if you don’t know what opportunities will be available in the future?

For example, imagine you’re looking for a job and know your skills are in high demand. After a couple of days of searching, you receive an offer out of the blue that’s better than any of the available positions you’ve seen so far. However, it doesn’t have everything you’re looking for. Do you take it or keep searching for better options?

According to Christian and Griffiths, statisticians have determined that the optimal way to solve this problem is to initially reject all opportunities, exploring your options to get a sense of what quality looks like. Then, at a certain point, commit to the next option that’s better than any you’ve seen so far. By calculating the probability that you pick the best option available for every possible “pivot point” from exploration to commitment, researchers have determined that you should explore for the first 37% of options, then commit to the next best opportunity.
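The 37% rule is easy to check empirically. Below is a short simulation (a sketch, not taken from the book): each trial shuffles 100 ranked options, skips the first 37% while tracking the best quality seen, then commits to the first later option that beats that benchmark.

```python
import random

def simulate_37_rule(n_options=100, trials=10_000):
    """Estimate how often look-then-leap picks the single best option:
    reject the first 37%, then take the first option better than all of them."""
    cutoff = int(n_options * 0.37)
    wins = 0
    for _ in range(trials):
        options = list(range(n_options))  # qualities 0..99, higher is better
        random.shuffle(options)
        best_seen = max(options[:cutoff])  # benchmark from the exploration phase
        chosen = options[-1]  # if nothing ever beats the benchmark, you're stuck with the last
        for quality in options[cutoff:]:
            if quality > best_seen:
                chosen = quality
                break
        if chosen == n_options - 1:  # success means landing the single best option
            wins += 1
    return wins / trials

random.seed(0)
print(simulate_37_rule())  # hovers around 0.37, matching the theory
```

The simulated success rate lands near 0.37, which is also the origin of the rule's name: both the optimal exploration fraction and the resulting success probability are 1/e, roughly 37%.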

This Optimal Solution Still Falls Short

Mathematician Hannah Fry pokes holes in Christian and Griffiths's strategy by showing how often it fails. If you're unlucky enough to encounter the best available option during your exploratory period, the algorithm forces you to reject it, and then to reject every subsequent option as well, since none will be better than the one you've already passed up. Even though Christian and Griffiths's algorithm is mathematically optimal, she states, it gives you only a dismal 37% chance of ending up with the best option.

Fry does, however, offer a solution. Christian and Griffiths define success as claiming the best opportunity available, but if you’re willing to accept an opportunity that’s good, but not the best, you can vastly increase your chance of ending up satisfied.

If you’re okay with an option in the top 5%, for example, you should begin your commitment period just 22% of the way through. According to Fry, this raises your chance of success from 37% to 57%. If you’re willing to accept an option in the top 15%, you can pivot 19% of the way through for a whopping 78% chance of success.
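Fry's point can be explored with a variant of the same look-then-leap simulation, parametrized by the pivot point and by how picky you are. A caveat: this is my sketch, not Fry's calculation, and the exact percentages depend on modeling details, so a simple simulation won't necessarily reproduce her figures. It does show the direction clearly, though: loosening the success criterion and pivoting earlier lifts the success rate well above the strict rule's 37%.

```python
import random

def settle_success(n=100, pivot_frac=0.37, top_frac=0.01, trials=10_000):
    """Chance that look-then-leap lands an option in the top `top_frac` of the
    field when exploration stops `pivot_frac` of the way through."""
    cutoff = max(1, int(n * pivot_frac))
    top_cut = n - max(1, int(n * top_frac))  # ranks >= top_cut count as success
    wins = 0
    for _ in range(trials):
        options = list(range(n))
        random.shuffle(options)
        best_seen = max(options[:cutoff])
        chosen = options[-1]  # forced to settle for the final option otherwise
        for quality in options[cutoff:]:
            if quality > best_seen:
                chosen = quality
                break
        wins += chosen >= top_cut
    return wins / trials

random.seed(1)
strict = settle_success()                              # insist on the single best
top5 = settle_success(pivot_frac=0.22, top_frac=0.05)  # settle for the top 5%
top15 = settle_success(pivot_frac=0.19, top_frac=0.15) # settle for the top 15%
print(strict, top5, top15)  # success rates rise as the standard relaxes
```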

2. How to Optimize Your Life

Christian and Griffiths’s next algorithm is a broader directive that applies to any area of your life you want to improve: To optimize your life, pursue whatever opportunity has a chance to be the greatest.

The authors frame life as a complex “multi-armed bandit” problem, referring to a model computer scientists use in machine learning. The multi-armed bandit is a theoretical experiment in which a decision-making agent is presented with a row of slot machines (“one-armed bandits”) and must try out different machines, learning from the outcomes to figure out which will pay off the most.

Christian and Griffiths explain that the multi-armed bandit problem’s optimal solutions are called “Upper Confidence Bound” algorithms, which recommend making decisions based on your options’ best-case scenarios. Pursue whatever opportunity in life has the potential to pay out the most, even if you think it’s extremely unlikely, since the only way to know for sure whether or not it’ll pay off is to test it yourself. Then, if you’ve given something a shot and determined that it’s not worth your while, adjust accordingly and shoot for the moon somewhere else.
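Here's a minimal sketch of the UCB1 algorithm on simulated slot machines (the payout probabilities below are invented for illustration). Each round, it plays the arm with the highest upper confidence bound: the arm's observed average payout plus an exploration bonus that shrinks the more that arm has been tried, so unproven arms get their shot while proven winners get the bulk of the plays.

```python
import math
import random

def ucb1(payout_probs, pulls=5_000, seed=42):
    """UCB1 on Bernoulli slot machines: each round, play the arm whose
    best-case estimate (average payout + exploration bonus) is highest."""
    rng = random.Random(seed)
    n_arms = len(payout_probs)
    counts = [0] * n_arms    # times each arm has been played
    totals = [0.0] * n_arms  # total payout collected from each arm
    for t in range(1, pulls + 1):
        if t <= n_arms:
            arm = t - 1  # play every arm once to initialize the estimates
        else:
            arm = max(
                range(n_arms),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        if rng.random() < payout_probs[arm]:  # simulate one pull
            totals[arm] += 1
        counts[arm] += 1
    return counts

# Hypothetical machines paying out 10%, 30%, and 80% of the time.
counts = ucb1([0.1, 0.3, 0.8])
print(counts)  # the 0.8 arm ends up with the overwhelming majority of pulls
```

Note how the bonus term embodies the authors' advice: an arm you've barely tried has a wide range of plausible payouts, so its upper bound is high and it's worth testing yourself; once it disappoints, its bound falls and the algorithm adjusts accordingly.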

Preparing for Black Swans

Nassim Nicholas Taleb supports Christian and Griffiths's advice to pursue opportunities with a low chance of outrageous success, pushing this logic even further in The Black Swan. Taleb offers a more extreme version of this idea, asserting that you should entirely ignore an opportunity's track record and expected gains and instead focus on the boundaries of possible outcomes. This includes considering extremely negative outcomes, which Christian and Griffiths don't focus on as much as positive ones.

For example, if a bank has consistently made millions giving out loans over the last forty years, you might claim that it has proven to be a well-paying “bandit” worth investing in. However, Taleb would argue that this track record means nothing and that the nature of loans comes with the ever-present devastating risk that borrowers will default. In other words, even if an opportunity presents an extremely good best-case scenario, you shouldn’t invest if it carries an equally extreme worst-case scenario—a point Christian and Griffiths neglect to consider.

3. How to Predict the Future

The next algorithm posed by Christian and Griffiths addresses the problem of an unpredictable future: To make better predictions, first use your prior knowledge of the situation to estimate the chances of something happening, then adjust that estimate based on observable data. This strategy of combining your prior beliefs with the evidence at hand is called “Bayes’s Rule.”

For example, if you want to predict when you’ll receive a raise at work, you might begin by asking a coworker how long it took for them to get a raise, then adjust that estimate based on how you think your boss views your performance.
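Bayes's Rule itself is one line of arithmetic: multiply each hypothesis's prior probability by how likely the evidence would be if that hypothesis were true, then normalize so the results sum to 1. Here's a sketch of the raise example with made-up numbers (both the prior and the likelihoods are invented for illustration).

```python
def bayes_update(prior, likelihood):
    """Bayes's Rule over discrete hypotheses: the posterior is proportional
    to prior times likelihood, normalized to sum to 1."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior: a coworker's experience suggests raises usually come near the
# one-year mark. Likelihood: a glowing performance review is assumed to be
# more probable if an early raise is on the way.
prior = {"6 months": 0.2, "12 months": 0.6, "18 months": 0.2}
likelihood = {"6 months": 0.7, "12 months": 0.5, "18 months": 0.2}
posterior = bayes_update(prior, likelihood)
print(posterior)  # probability shifts toward an earlier raise
```

With these numbers, the glowing review moves the "6 months" hypothesis from 20% up to about 29% and pushes "18 months" down to about 8%: the prior anchors the estimate, and the evidence adjusts it.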

Focus Only on the Information That Matters

In Superforecasting, Philip Tetlock and Dan Gardner agree that proper application of Bayes’s Rule is necessary to make accurate predictions. However, most people are bad at this kind of thinking because to use Bayesian inference, you not only need accurate knowledge of the situation, but you also need to know how impactful each piece of knowledge is. This is where many people trip up: They’re bad at determining what information actually matters. In our example above, you might overestimate how much your job performance hastens your pay raise and assume you’re due for a raise much sooner than you actually are.

Tetlock and Gardner explain that the best “superforecasters” make significantly smaller adjustments in light of new information than the average predictor. In most cases, only a few key facts will have a major impact on your forecast—so, when adjusting your prediction, ignore the vast majority of observable evidence.

4. Why You Should Make Less Informed Decisions

Christian and Griffiths’s final algorithm to aid decision-making is as follows: To make better decisions, consider less information.

With this algorithm, the authors address the problem of overfitting. In statistics and machine learning, “overfitting” occurs when a model takes too many variables into account, resulting in faulty understanding. Christian and Griffiths argue that, in the same way, if you consider too many variables when making a decision, you’ll “overfit,” overestimating the impact of insignificant information and underestimating the details that really matter.

According to Christian and Griffiths, the trick to conquering overfitting is to consciously restrict the amount of information you consider when making decisions. Identify one or two factors that matter the most and ignore everything else. For example, you may decide what job to take solely based on how much you expect to enjoy the work.
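The statistical version of overfitting can be shown in a few lines (the "noisy" observations below are invented): a cubic that passes through four noisy measurements of the simple truth y = x exactly has zero training error, yet it predicts far worse at a new point than a plain two-parameter line, because it has fit the noise rather than the signal.

```python
def lagrange(points, x):
    """Evaluate the interpolating polynomial through `points` at x.
    It passes through every training point exactly: zero training error."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def fit_line(points):
    """Ordinary least-squares line: just two parameters, slope and intercept."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = sum((x - mx) * (y - my) for x, y in points) / sum(
        (x - mx) ** 2 for x, _ in points
    )
    return lambda x: my - slope * mx + slope * x

# Noisy observations of the simple truth y = x (noise values are made up).
train = [(0, 0.4), (1, 0.8), (2, 2.9), (3, 2.6)]
line = fit_line(train)
print(round(line(4), 2), round(lagrange(train, 4), 2))  # truth at x=4 is 4.0
```

The line predicts 3.85 at x = 4, close to the truth of 4.0; the "perfect-fit" cubic predicts -4.2, wildly off, which is exactly the failure mode Christian and Griffiths warn about when you weigh too many variables.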

Minimalism: Stop Overfitting Your Life

Christian and Griffiths assert that to conquer overfitting, you must focus on what matters and ignore everything else. In Minimalism, Joshua Millburn and Ryan Nicodemus apply this logic to life itself.

Modern humans have a tendency to overfit, trying to make themselves happier by adding more to their lives instead of focusing on the few factors that matter. Goods like luxury cars, fancy homes, and picturesque vacations do nothing but distract us from the things in life that offer the most value, like personal health, loving relationships, and a sense of contribution to others.

In general, removing things in your life that don’t add value is a more sustainable path to happiness than constantly trying to add bigger and better new pleasures.

———End of Preview———

Like what you just read? Read the rest of the world's best book summary and analysis of Brian Christian and Tom Griffiths's "Algorithms to Live By" at Shortform .

Here's what you'll find in our full Algorithms to Live By summary :

  • How to schedule your to-do list like a computer
  • Why making random decisions is sometimes the smartest thing to do
  • Why you should reject the first 37% of positions in your job search

Darya Sinusoid

Darya’s love for reading started with fantasy novels (The LOTR trilogy is still her all-time-favorite). Growing up, however, she found herself transitioning to non-fiction, psychological, and self-help books. She has a degree in Psychology and a deep passion for the subject. She likes reading research-informed books that distill the workings of the human brain/mind/consciousness and thinking of ways to apply the insights to her own life. Some of her favorites include Thinking, Fast and Slow, How We Decide, and The Wisdom of the Enneagram.
