What roles do intuition and reason play when it comes to our moral choices? How and why do we seek to justify these choices?
Social psychologist Jonathan Haidt argues that we use intuition rather than reason to make most of our moral choices; we then use moral reasoning to justify those decisions to others. In other words, we care more about appearing to do the right thing than about actually doing it.
Read more to learn about how we make moral choices and then justify them.
Intuition and Reason
If morality is largely a cultural construct, does either intuition or rationality play any part in our moral choices? Yes, but their roles may surprise you.
The human mind functions something like an elephant with a rider on top. The elephant, which represents intuition, makes most of the decisions, guiding itself and the rider in different directions in response to all of the stimuli it takes in. The rider, or reason, can occasionally affect the elephant’s path a bit, but it’s mostly there to explain the decisions of the elephant after the elephant makes them. Moral reasoning is thus not a search for any empirical truth as much as it is a method by which we justify our moral choices.
We change our minds only when people we respect talk to us and appeal to our intuition. We listen to them because we are social creatures, desperate for the approval of our peers. Essentially, we care more about others thinking we're doing the right thing than about actually doing it.
How We Justify Our Moral Choices
The fact that we're social creatures is key to understanding why we make the moral choices we do. We act "morally" primarily because we fear the social ramifications of getting caught acting immorally—we behave in ways we know we could justify to others if we had to. In this sense, the purpose of moral reasoning is to help us advance socially, whether by maintaining our reputations as moral individuals or persuading others to take our side in conflicts. Consequently, we think much more like a politician trying to win over constituents than a scientist searching for truth. Five examples illustrate this point:
- We are fascinated by polling data (of ourselves): Experiments show that no matter how much someone says they don’t care what others think of them, their self-esteem will plummet when told that strangers don’t like them and will rise rapidly when told strangers do. On an unconscious level, we’re constantly measuring our social status. The elephant part of the mind is concerned about what others think of us, even if the “rider,” the rational mind, isn’t.
- We all have a “press secretary,” constantly justifying everything: In other words, we all have confirmation bias and are constantly on the hunt, like a press secretary, for evidence that justifies our way of thinking. Simultaneously, we ignore anything that might challenge it. Research shows that people with higher IQs can generate more arguments to support a viewpoint, but only for their own side. As soon as the elephant leans in a direction, the rider starts looking for reasons to explain it.
- We rationalize cheating and lying so well that we can convince ourselves we’re honest: Like politicians, when given the opportunity and plausible deniability, most people will cheat but still believe that they are virtuous. They cheat up to the point where they can no longer rationalize the cheating: In one study, when a cashier handed a subject more money than she was due, only 20% of the subjects corrected the mistake—because they were passive participants in the transaction, they could reconcile keeping the extra money with the belief that they were honest people. However, when the cashier asked if the amount was correct, 60% of people corrected the cashier’s mistake and gave the extra money back—in this case, it was harder to deny responsibility for the mistake because the cashier directly asked them about it.
- We can reason ourselves into any idea: If we want to believe in something, we ask, “Can I believe it?” and look for reasons to believe. As soon as we find a piece of evidence, even if it’s weak, we stop searching and feel justified in that belief. On the other hand, if we don’t want to believe something, we ask, “Must I believe it?” and look for reasons not to. If we find even one piece of counterevidence, we feel justified in not believing it. In sum, unlike scientists, who generally change their theories in response to the strongest evidence, most people believe what they want to believe.
- We believe any evidence that supports our “team”: This is why people don’t vote based on their self-interest. Rather, people care about their groups—political, racial, regional, religious—and base their decisions on their participation in those groups. For example, when people are shown hypocritical statements made by political leaders in their chosen party, they start squirming and looking for justifications. On the other hand, when they see the same hypocrisy from an opponent, they delight in it and don’t attempt to justify it. Furthermore, when they’re shown a statement that releases their candidate from something that looked hypocritical, they get a hit of dopamine. The brain of the partisan starts to need that dopamine—being a partisan person is literally addictive.
These rationalizations don't guide or create our morality. Rather, they happen after we make our moral choices, in order to justify our intuitions and convince others (and ourselves) that we're moral beings.
With a better grasp of what drives our moral choices and how we justify them, we can better understand both ourselves and each other.
———End of Preview———
Like what you just read? Read the rest of the world's best book summary and analysis of Jonathan Haidt's "The Righteous Mind" at Shortform.
Here's what you'll find in our full The Righteous Mind summary:
- Why we all can't get along
- How our divergent moralities evolved
- How we can counter our natural self-righteousness to decrease political divides