What is the New Jim Code? Is technology truly fair and objective? Do algorithms simply crunch numbers and deliver results based on data?
In arguing that digital systems perpetuate inequality, Ruha Benjamin introduces a troubling concept she calls “the New Jim Code.” Keep reading to learn how seemingly neutral technologies might be quietly reshaping who gets opportunities and who gets left behind.
What Is the New Jim Code?
Benjamin contends that we’ve entered a new phase in the evolution of racism. She calls this “the New Jim Code,” a term that echoes Michelle Alexander’s “New Jim Crow” to emphasize the continuity of racial control mechanisms throughout American history. This latest iteration involves racism embedded in digital technologies and algorithms. Examples include facial recognition software that struggles to accurately identify darker-skinned faces and risk-assessment algorithms that disproportionately flag Black individuals as “high risk” of committing crimes.
Digital technology increasingly mediates access to opportunities and resources, so when these systems embed racial biases, they exacerbate inequalities. Consider healthcare algorithms that determine patient care: When these systems use past medical spending as a proxy for medical need, they recommend less care for Black patients than for white patients with the same symptoms—not because Black patients are healthier, but because historical racism in healthcare meant they had less access to expensive treatments in the past. Similarly, mortgage-lending algorithms trained on historical loan data can perpetuate decades of redlining by denying loans to qualified applicants in predominantly Black neighborhoods.
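The proxy mechanism can be made concrete with a minimal, hypothetical Python sketch (the access gap, group sizes, and all numbers here are invented for illustration; they aren’t drawn from the book or from any real study). Two groups are equally sick, but one group’s historical access barriers mean the same level of need produced less spending:

```python
import random

random.seed(7)

# Toy proxy-bias simulation (all numbers assumed for illustration): an
# algorithm enrolls patients in an extra-care program based on PAST
# SPENDING, used as a stand-in for medical need.

def make_patient(group):
    need = random.uniform(0, 10)               # true need: same distribution for both groups
    access = 1.0 if group == "white" else 0.8  # assumed historical access gap
    return {"group": group, "need": need, "spending": need * access}

patients = [make_patient(g) for g in ["white"] * 500 + ["Black"] * 500]

# The algorithm: rank by the spending proxy and enroll the top 20%.
by_spending = sorted(patients, key=lambda p: p["spending"], reverse=True)
enrolled = by_spending[:200]

for g in ("white", "Black"):
    members = [p for p in enrolled if p["group"] == g]
    avg_need = sum(p["need"] for p in members) / len(members)
    print(f"{g}: {len(members)}/200 enrolled, average need {avg_need:.1f}")
```

Both groups are equally sick in this sketch, yet far fewer Black patients clear the spending threshold, and those who do are sicker on average than their white counterparts. Ranking by true need instead of the spending proxy would erase the gap, which is why the choice of proxy, not the math, carries the bias.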
(Shortform note: Today’s healthcare algorithms are a modern chapter in a long history of racial injustice in medicine. For example, enslaved Black Americans were used as unwilling experimental subjects by white physicians, and J. Marion Sims, often called “the father of American gynecology,” perfected his surgical techniques by operating on enslaved women without anesthesia. Similarly, Henrietta Lacks’s cancerous cells were taken without consent in 1951 and became the foundation for countless medical advances, yet her family received no compensation or recognition for decades. Such exploitation created medical knowledge that primarily benefited white patients while establishing patterns of unequal care that continue today.)
The Invisibility of the New Jim Code
Benjamin argues that the New Jim Code is particularly insidious because we think of technology as neutral, objective, and fair. She explains that the algorithms that drive modern technology operate through numbers, statistics, and code rather than through explicit racial categories, so their outputs seem data-driven rather than opinion-based. This veneer of scientific authority shields discriminatory outcomes from scrutiny: We’re more likely to question a hiring manager’s judgment than an algorithm’s determination that certain candidates are “not a good fit.” The technology’s complexity also creates plausible deniability: Developers can claim they never programmed the algorithm to discriminate, even when that’s precisely what it does.
For these reasons, many people reject the idea that technology can perpetuate racism. However, Benjamin argues that a system doesn’t have to be built by someone with explicit racial animus or malicious intent to produce racist outcomes. She understands racism as a systemic force rather than a personal attitude and argues we should judge systems by their effects rather than their intentions.
How Do Algorithms Encode Bias?

Benjamin’s “New Jim Code” argues that seemingly neutral algorithms can perpetuate racial inequalities. But what exactly is an algorithm, and how can mathematical calculations be biased? An algorithm is simply a set of rules or instructions for solving a problem or performing a task. In computing, algorithms process inputs (data) to produce outputs (decisions or recommendations) based on predefined criteria. While the mathematical operations themselves may be neutral, several mechanisms can introduce bias.

First, algorithms make assumptions about what’s “normal” based on patterns in their training data. A facial recognition system trained mostly on white faces develops a mathematical model optimized for recognizing features common to those faces. When presented with darker-skinned faces, it may perform poorly because its model isn’t calibrated for those features.

Second, many algorithms use “collaborative filtering” to make recommendations based on patterns of similarity between users. This introduces its own biases: Such algorithms tend to recommend already-popular items, struggle with new items that have few ratings, and create increasingly homogeneous recommendations over time.

People who don’t match the most common patterns in the data bear the brunt of these mathematical biases. For example, if an algorithm learns that, historically, most successful job applicants attended Ivy League universities, it will prioritize graduates from those institutions, creating a disadvantage for candidates who attended historically Black colleges or community colleges.

When algorithms make biased predictions that influence real-world decisions (like who gets loans, jobs, or healthcare), those decisions generate new data that reinforces the original bias. This creates a self-reinforcing cycle of technological discrimination that appears objective because it’s expressed through mathematical formulas. A toy simulation of this cycle appears below.
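To see how that cycle compounds, here’s a minimal, hypothetical Python sketch (the groups, the “lookalike bonus” scoring rule, and all numbers are invented for illustration; this is not code from the book). A screening algorithm rewards candidates for resembling past hires, and each round’s hires are fed back into its training data:

```python
import random

random.seed(42)

# Toy feedback-loop simulation (all assumptions invented for illustration):
# groups A and B are equally qualified, but group A dominates the legacy
# hiring data that the screening algorithm "learns" from.

def group_shares(history):
    """'Training' step: measure each group's share of all past hires."""
    return {g: history.count(g) / len(history) for g in ("A", "B")}

def screen(applicants, shares, slots):
    """Score = true qualification + a lookalike bonus; hire the top scorers."""
    scored = [(qual + shares[group], group) for group, qual in applicants]
    scored.sort(reverse=True)
    return [group for _, group in scored[:slots]]

history = ["A"] * 80 + ["B"] * 20  # skewed legacy data: 80% group A

for rnd in range(5):
    shares = group_shares(history)
    # Both groups draw qualifications from the same distribution.
    applicants = [(g, random.random()) for g in ["A"] * 50 + ["B"] * 50]
    hires = screen(applicants, shares, slots=20)
    history.extend(hires)  # today's hires become tomorrow's training data
    print(f"round {rnd}: B hired {hires.count('B')}/20; "
          f"B share of training data {shares['B']:.0%}")
```

Even though both groups are identically qualified, the lookalike bonus locks in the historical skew: group B’s share of the training data (and with it, any candidate’s odds of being hired from group B) shrinks with every retraining round. No line of this code mentions race or group animus, which is exactly why the resulting discrimination looks objective.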
Benjamin contends that the combination of perceived objectivity and technical opacity makes racial discrimination under the New Jim Code harder to identify and challenge than many past manifestations of racism. The New Jim Code operates across virtually every domain of modern life—from healthcare, education, and employment to housing, criminal justice, and social services—making it potentially the most pervasive and difficult-to-challenge iteration of racism yet.
(Shortform note: Most of us don’t realize how many decisions are now mediated by algorithms: credit card applications, job application screening, college admissions, medical diagnoses, criminal sentencing, content recommendations, welfare eligibility, mortgage approvals, insurance rates, and targeted advertising. As Hannah Fry (Hello World) notes, “We’ve invited these algorithms into our courtrooms and our hospitals and our schools, and they’re making these tiny decisions on our behalf that are subtly shifting the way our society is operating.” People rarely know when they’ve been disadvantaged by an algorithm, making these systems particularly difficult to challenge compared to overtly discriminatory practices of the past.)