Who is Sendhil Mullainathan? What were his research findings regarding judges’ bail decisions?
Sendhil Mullainathan is a professor of Computation and Behavioral Science at the University of Chicago. In 2017, Mullainathan and his colleagues published a study comparing judges' bail decisions with the predictions of a machine-learning algorithm. What did they find?
Keep reading for more about the experiment Mullainathan ran and his findings.
The Mullainathan Experiment
On a typical Thursday in Brooklyn, Judge Solomon was presiding over his courtroom. His primary responsibility for the day was arraignments. He had to see every defendant who had been arrested in the last 24 hours, look at their criminal history, listen to the testimony of both the prosecution and the defense, and then decide if the defendant would be offered bail and the chance to be released from custody. In short, Judge Solomon had to look a perfect stranger in the eye, assess his character, and decide if he deserved his freedom. But does looking a person in the eye actually help you judge his nature?
A team from the University of Chicago, led by Sendhil Mullainathan, set out to answer that question. The experiment went like this:
- Mullainathan's team gathered the records of all 554,689 defendants who went through the New York City courts from 2008 to 2013. They found that 400,000 of those defendants had been released by the judges who presided over their arraignments.
- Mullainathan's team built an artificial intelligence system: a machine-learning program designed to predict each defendant's risk of committing a crime while out on bail.
- The computer was fed the data from the same 554,689 cases. It then drew up its own list of the 400,000 defendants it judged least likely to commit a crime while out on bail.
- The defendants chosen by the computer were 25% less likely to commit a crime while waiting for trial than the defendants released by human judges.
- The computer flagged 1% of the defendants as "high risk" and not suited for release. The human judges presiding over those same cases had released nearly half of that high-risk group.
- The defendants the human judges treated as "high risk" did not match the computer's rankings. In fact, they were scattered across the computer's entire range of predicted risk.
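The rank-and-compare procedure in the steps above can be sketched in miniature. Everything here is illustrative: the toy docket, the crude one-feature risk score, and the field names are all hypothetical stand-ins, not the study's actual model or data.

```python
# Illustrative sketch of the experiment's logic: score each defendant's risk,
# release the k lowest-risk defendants, then compare reoffense rates against
# the group the judges actually released.

def risk_score(defendant):
    # Hypothetical toy score: more prior arrests -> higher predicted risk.
    # The real study used a far richer model trained on case records.
    return defendant["prior_arrests"]

def machine_release(defendants, k):
    """Release the k defendants the model ranks least risky."""
    ranked = sorted(defendants, key=risk_score)
    return ranked[:k]

def reoffense_rate(released):
    """Share of a released group that went on to commit a crime."""
    return sum(d["reoffended"] for d in released) / len(released)

# Tiny synthetic docket (made-up data, for illustration only).
docket = [
    {"id": 1, "prior_arrests": 0, "reoffended": False, "judge_released": True},
    {"id": 2, "prior_arrests": 5, "reoffended": True,  "judge_released": True},
    {"id": 3, "prior_arrests": 1, "reoffended": False, "judge_released": False},
    {"id": 4, "prior_arrests": 7, "reoffended": True,  "judge_released": True},
    {"id": 5, "prior_arrests": 2, "reoffended": False, "judge_released": True},
]

# Release the same number of defendants the judges did, so the comparison is fair.
k = sum(d["judge_released"] for d in docket)
machine = machine_release(docket, k)
judge = [d for d in docket if d["judge_released"]]

print(reoffense_rate(machine))  # 0.25 on this toy docket
print(reoffense_rate(judge))    # 0.5 on this toy docket
```

Holding the number released fixed and comparing reoffense rates is what lets the study claim the machine's picks were safer, rather than simply more cautious.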
Translating the Results
The computer was much better than the human judges at gauging a defendant's likelihood of committing another crime. The study found that judges were not only setting the bar for release too low but were also mis-ranking many defendants entirely.
It is important to note that the human judges who presided over these cases had three resources available to them when making their bail decisions:
- The defendant’s record
- The testimony of the attorneys
- The judge’s own personal observations of the defendant standing before him
Mullainathan’s computer had only one of these three resources: the record of each defendant. Yet the machine still beat the human judges at making bail decisions.
———End of Preview———
Like what you just read? Read the rest of the world's best book summary and analysis of Malcolm Gladwell's "Talking to Strangers" at Shortform.
Here's what you'll find in our full Talking to Strangers summary :
- Why we don't understand strangers
- How to talk to strangers in a cautious way so you don't get fooled
- How Hitler deceived so many world leaders