
Is racism embedded in the digital technologies that increasingly shape our daily lives? Do algorithms and AI systems perpetuate centuries-old patterns of discrimination?
Ruha Benjamin’s Race After Technology: Abolitionist Tools for the New Jim Code addresses these urgent questions. Benjamin’s work aims to show how seemingly neutral digital systems—from hiring software to healthcare algorithms—actually amplify racial inequalities in new and often invisible ways.
Keep reading for an overview of this thought-provoking book.
Overview of Ruha Benjamin’s Race After Technology
Ruha Benjamin’s Race After Technology: Abolitionist Tools for the New Jim Code argues that race is a technology—a tool we use to organize society into hierarchies that benefit the people who have the most power. This technology has evolved over centuries, and one way it operates today is through digital systems, from search engines to surveillance systems to predictive algorithms. As these systems increasingly shape our access to employment, healthcare, housing, and justice, they amplify historical patterns of discrimination—while hiding behind claims of objectivity and fairness. Benjamin refers to this phenomenon as “the New Jim Code” and characterizes it as the latest evolution in America’s long history of racial control and discrimination.
Benjamin is a professor of African American Studies at Princeton University and founding director of the Ida B. Wells Just Data Lab. When she published Race After Technology in 2019, facial recognition systems were already misidentifying Black faces at alarming rates, predictive policing algorithms were disproportionately targeting Black neighborhoods, and hiring algorithms were reproducing historical patterns of employment discrimination. Benjamin argues that these weren’t isolated failures, but systemic effects of the New Jim Code.
This overview of Race After Technology unpacks Benjamin’s argument that racial categories function as technologies of control, explores how the New Jim Code embeds racism into technology, and examines Benjamin’s recommendations for race-conscious design and abolitionist tools.
What Is Race? What Is Racism?
Benjamin argues that race is a powerful social technology used to separate people into groups, stratify those groups, and explain away injustice. Racism functions as the operating system for this technology: the set of beliefs, practices, and structures that make race “work” as a tool of control. Throughout American history, this operating system has been deliberately engineered to maintain hierarchies of power and to justify inequitable distributions of resources.
Benjamin explains that, like other technologies, race (and the system of racism that makes it function) has gone through multiple iterations, each designed to maintain racial control when previous versions faced challenges or resistance. When one form of racial control becomes socially or politically untenable, racism doesn’t disappear—it adapts. Each new iteration is less visible and more difficult to challenge than the last. To illustrate this, Benjamin traces the evolution of racism through three major phases; let’s explore each.
Slavery: Denying Humanity
Benjamin explains that racism emerged as a way to reconcile the glaring contradiction between America’s proclaimed ideals of liberty and equality and the brutal reality of enslavement. By denying the full humanity of Black people, America could maintain both its democratic rhetoric and its racial hierarchy. This earliest, most explicit form of racism required little disguise: It operated through open claims of racial inferiority and dehumanization.
Jim Crow: Explicit Denial of Rights
When slavery ended in 1865, Jim Crow laws evolved to serve the same purpose: maintaining white supremacy by explicitly denying Black Americans access to voting, education, housing, and economic opportunities. Benjamin explains that these laws ensured Black Americans remained a subordinate class, despite formal freedom. Though less extreme than slavery, Jim Crow racism remained explicit: The laws openly specified different treatment based on race.
The New Jim Crow: Implicit Denial of Rights
As civil rights legislation dismantled explicit segregation in the 1960s, a new racist technology emerged. Known as the New Jim Crow, a term from Michelle Alexander’s 2010 book of the same name, this system of discrimination operated through ostensibly “race-neutral” policies that nevertheless perpetuated racial inequalities, most notably the War on Drugs.
The War on Drugs was a federal effort to combat drug use and distribution via stricter law enforcement. Policies associated with this effort didn’t mention race but led to the mass incarceration of Black Americans, largely because laws mandated harsher penalties for crack cocaine, which was more common in Black communities, than for powder cocaine, which was more common in white communities. Such policies exacerbated racial inequality while allowing America to claim it had moved beyond racism after the Civil Rights Movement.
What Is ‘The New Jim Code’?
After explaining the history of racism in the US, Benjamin contends that we’ve entered a new phase in the evolution of racism. She calls this “the New Jim Code,” a term that echoes Alexander’s “New Jim Crow” to emphasize the continuity of racial control mechanisms throughout American history. This latest iteration involves racism embedded in digital technologies and algorithms—for example, facial recognition software that struggles to accurately identify darker-skinned faces and risk assessment algorithms that disproportionately flag Black individuals as “high risk” for committing crimes.
Digital technology increasingly mediates access to opportunities and resources, so when these systems embed racial biases, they exacerbate inequalities. Consider healthcare algorithms that determine patient care: When these systems use past medical spending as a proxy for medical need, they recommend less care for Black patients than for white patients with the same symptoms—not because Black patients are healthier, but because historical racism in healthcare meant they had less access to expensive treatments in the past. Similarly, mortgage-lending algorithms trained on historical loan data can perpetuate decades of redlining by denying loans to qualified applicants in predominantly Black neighborhoods.
The Invisibility of the New Jim Code
Benjamin argues that the New Jim Code is particularly insidious because we think of technology as neutral, objective, and fair. She explains that the algorithms that drive modern technology operate through numbers, statistics, and code rather than through explicit racial categories, so their outputs seem data-driven rather than opinion-based. This veneer of scientific authority shields discriminatory outcomes from scrutiny: We’re more likely to question a hiring manager’s judgment than an algorithm’s determination that certain candidates are “not a good fit.” The technology’s complexity also creates plausible deniability: Developers can claim they never programmed the algorithm to discriminate, even when that’s precisely what it does.
For these reasons, many people reject the idea that technology can perpetuate racism. However, Benjamin argues that a system doesn’t have to be built by someone with explicit racial animus or malicious intent to produce racist outcomes. She understands racism as a systemic force rather than a personal attitude and argues we should judge systems by their effects rather than their intentions.
Benjamin contends that the combination of perceived objectivity and technical opacity makes racial discrimination under the New Jim Code harder to identify and challenge than many past manifestations of racism. The New Jim Code operates across virtually every domain of modern life—from healthcare, education, and employment to housing, criminal justice, and social services—making it potentially the most pervasive and difficult-to-challenge iteration of racism yet.
How Does the New Jim Code Operate?
Now that you know what the New Jim Code is, let’s discuss how it works. Benjamin explains that the New Jim Code operates through four key dimensions, each a way in which racial hierarchies become encoded in seemingly neutral technological systems. These dimensions don’t operate in isolation but interact with and reinforce each other, creating multiple layers through which racism becomes embedded in the technologies that shape our lives.
Asymmetric Visibility: The Paradox of Being Watched But Not Seen
Asymmetric visibility—what Benjamin calls “coded exposure”—describes how technologies selectively focus attention on certain aspects of marginalized groups while rendering other aspects invisible. Benjamin explains that algorithms tend to amplify stereotypical views of marginalized communities—often as threats or problems to be managed—while simultaneously failing to recognize their individuality, humanity, and specific needs.
An example of this is how content moderation algorithms on social media platforms disproportionately flag posts by Black users, especially those written in African American Vernacular English, as “offensive” or “hateful.” Even when the same message is posted by a Black user and a white user, the Black user’s post is more likely to be removed. These algorithms, trained on data labeled by humans who bring their own biases to the task, end up amplifying racial discrimination under the guise of neutral content policies. The result is a digital environment where Black expression is hypervisible to surveillance systems as potentially problematic content, yet invisible in terms of its cultural context and value.
Biased Architecture: When Prejudice Is Built Into the System
Biased architecture—in Benjamin’s term, “engineered inequity”—is created when technology reinforces social biases by learning from flawed data. If an algorithm is trained on historical data that reflects societal prejudices—like the hiring records from companies that rarely hired minorities—it will reproduce those patterns of discrimination. This becomes especially problematic when these algorithms make important decisions about people’s lives, determining who gets job interviews, loan approvals, or shorter prison sentences. The discrimination gets built right into systems that seem objective but actually perpetuate inequality.
An example of this is how résumé-screening algorithms trained on historical hiring data often disadvantage Black applicants. Studies of human hiring decisions have repeatedly shown that applicants with white-sounding names receive more callbacks than applicants with identical résumés and Black-sounding names. When those human biases become encoded in automated systems that screen résumés based on patterns from past hiring, the discrimination becomes systematized and hidden behind seemingly objective technical processes. Companies using such algorithms may believe they’re making “data-driven” hiring decisions when, in reality, they’re perpetuating historical patterns of exclusion rather than evaluating candidates on their actual qualifications.
Invisible Exclusion: When “Neutral” Design Leaves People Out
Invisible exclusion—or “default discrimination,” in Benjamin’s terms—occurs when technology is designed with only the dominant group in mind. When developers (often white men) create and test products primarily for and on people like themselves, they overlook how the technology might work differently for others. Even without intending harm, these “universal” designs inevitably function better for some people than others.
Benjamin notes this happens in many forms of technology we use every day: For example, some public restrooms have automatic soap dispensers that don’t recognize darker skin tones, and search engines return images of mostly white people for searches like “professional hairstyles.” Sometimes the consequences are more serious, as with medical devices calibrated for lighter skin: Pulse oximeters give less accurate readings on darker skin and can miss dangerous drops in Black patients’ blood oxygen levels.
Disguised Control: When Surveillance Masquerades as Help
Disguised control—or, as Benjamin puts it, “technological benevolence”—describes how technologies that monitor, control, and restrict are often presented as helpful, progressive innovations. This framing makes it difficult to criticize potentially harmful systems because they come wrapped in the language of assistance and improvement.
Electronic benefit transfer (EBT) cards are one example of disguised control. Used to distribute welfare benefits like food assistance, EBT cards are promoted as efficient, modern alternatives to paper vouchers or checks. However, they also function as a tool of surveillance and control: Governments can track recipients’ purchases, restrict what they can buy, and suspend benefits with little transparency or recourse. People who pay for food with credit cards or cash aren’t subject to the same scrutiny.
How Can We Challenge the New Jim Code?
Benjamin doesn’t just diagnose the problem of racism in technology. She also offers a path forward to a future where we design technologies that challenge rather than reinforce racial hierarchies. She recommends two ideological shifts—adopting a race-conscious approach to technology and moving from reform to abolition—as well as practical strategies for changing how technology is developed. Let’s explore each of her recommendations.
Take a Race-Conscious Approach to Technology
Benjamin argues against “colorblind” approaches to technology that dominate the tech industry today. These approaches typically either treat race as irrelevant to technology development or focus narrowly on hiring a few people of color without changing how technologies are designed. Many companies believe that simply adding one or two Black or Brown faces to their teams “solves” technological racism, but they continue to build products that ignore how race shapes users’ experiences.
Meanwhile, developers often operate under the false assumption that they can build neutral tools in a biased world—that if they simply ignore race in their design process, their technologies will work equally well for everyone. The result is technologies that inevitably reproduce and sometimes amplify existing inequalities.
Instead, Benjamin advocates for race-conscious design: deliberately considering how racism operates and actively working to create technologies that challenge rather than reinforce racial hierarchies. Race-conscious design involves:
- Acknowledging how racism shapes data, institutions, and social contexts
- Examining how technologies might impact different racial groups before deployment
- Including diverse perspectives throughout the design process, not just as an afterthought
- Building tools specifically aimed at exposing and challenging racial inequities
- Implementing structural changes to power dynamics, including regulatory frameworks, community oversight mechanisms, and legal remedies that allow people to seek redress for algorithmic harm
Benjamin highlights the Algorithmic Justice League (AJL) as an example of an organization that embodies race-conscious design principles. Founded by Joy Buolamwini after her research uncovered significant racial and gender bias in facial recognition systems, the AJL works to highlight and address algorithmic bias through research, advocacy, and art. The organization created the Safe Face Pledge, a commitment companies could take to prohibit the use of facial recognition in weaponry or lethal systems and to increase transparency about how facial recognition is used, particularly in policing.
(Shortform note: The Safe Face Pledge initiative was later sunset when it became clear that self-regulation was insufficient, as major tech companies refused to sign on despite being given a clear path to mitigate harms. This outcome demonstrates Benjamin’s point that voluntary corporate commitments alone are inadequate—addressing technological racism requires changes to the fundamental power structures of the tech industry, not just technical fixes or individual goodwill.)
Shift From a Focus on Reform to Demands for Abolition
Benjamin distinguishes between reformist approaches that try to make biased technologies slightly less discriminatory and abolitionist approaches that question whether certain technologies should exist at all in their current forms. She argues that some technologies, like predictive policing and digital surveillance, are so fundamentally embedded in systems of racial control that they cannot be reformed. Instead, they must be abolished and replaced with alternatives that center justice and community wellbeing.
Benjamin highlights the Appolition app as an example of abolitionist technology that addresses injustice in the bail system. The bail system keeps people in pretrial detention simply because they can’t afford to pay for their freedom. Because marginalized people are less likely to be wealthy, this system disproportionately harms communities of color. While a reformist approach might focus on creating a “fairer” risk assessment algorithm to determine bail amounts—one that appears race-neutral but still relies on factors correlated with race like arrest history or zip code—Appolition takes a fundamentally different path.
The app helps people collectively pool small donations (rounded-up spare change from everyday purchases) that are sent to community bail funds. These organizations use the money to free people from pretrial detention while also working toward ending the cash bail system entirely. Each time someone bailed out through these funds returns to court without a financial incentive to do so, it undermines the premise that cash bail is necessary to ensure court appearances. Through this dual approach—providing immediate relief while building evidence against the system’s necessity—Appolition demonstrates how abolitionist technologies can address both symptoms and root causes simultaneously.
Make Practical Changes to the Existing Tech Industry
In addition to ideological shifts like a race-conscious approach and an abolitionist framework, Benjamin suggests practical changes we could implement within the existing tech industry to mitigate the racist impacts of technology.
Diversify the Tech Workforce
First, Benjamin argues that who builds technology matters. The homogeneity of the tech workforce—which consists mainly of white and Asian men—contributes to blind spots in design and development. As we’ve discussed, many companies try to address this via superficial diversity initiatives, hiring a few people from underrepresented groups. However, Benjamin argues these initiatives are inadequate because they place the burden of “fixing” bias on minority employees without changing underlying power dynamics. For example, a Black engineer might flag a potentially discriminatory feature, but if they lack the authority to change the project’s course, their insight will go unheeded.
In contrast, Benjamin says meaningful diversity entails creating conditions where everyone can shape the technologies being built. This requires:
- Increasing representation at all levels, especially in leadership and decision-making positions
- Creating inclusive environments where diverse perspectives are valued and can influence product decisions
- Addressing structural barriers to entry and advancement
- Compensating people for expertise drawn from lived experience, not just technical credentials
Audit Technologies for Bias
Second, Benjamin advocates for rigorous testing and auditing of technologies for discriminatory impacts before deployment. Effective auditing includes:
- Testing with diverse populations and in diverse contexts
- Examining training data for historical biases
- Analyzing outcomes across different demographic groups
- Continuous monitoring for unexpected discriminatory effects
She emphasizes that these audits must have teeth: They must lead to substantive changes when bias is found. For example, when facial recognition technology shows higher error rates for darker-skinned faces, a superficial adjustment might be to simply add a disclaimer about potential inaccuracy. A substantive change would be redesigning the system with more diverse training data or even delaying deployment until acceptable accuracy across all demographics is achieved. The difference is whether the burden of accommodating the flawed technology falls on the technology itself or on those it misidentifies.
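(Shortform note: To make the idea of an outcome audit concrete, here’s a minimal, hypothetical sketch in Python of how an auditor might compare a system’s error rates across demographic groups and flag disparities. The group labels, sample data, and disparity threshold are illustrative assumptions, not tools Benjamin describes.)

```python
# Illustrative sketch only (not from the book): comparing a system's error
# rates across demographic groups and flagging large disparities.
# The group names, sample records, and 1.2x threshold are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

def flag_disparities(rates, max_ratio=1.2):
    """Return groups whose error rate is disproportionately high relative to the best-performing group."""
    baseline = min(rates.values())
    if baseline == 0:
        return [group for group, rate in rates.items() if rate > 0]
    return [group for group, rate in rates.items() if rate / baseline > max_ratio]

# Hypothetical face-matching results: (group, system's prediction, ground truth)
results = [
    ("darker-skinned", "match", "no match"),
    ("darker-skinned", "no match", "no match"),
    ("darker-skinned", "match", "match"),
    ("lighter-skinned", "match", "match"),
    ("lighter-skinned", "no match", "no match"),
    ("lighter-skinned", "match", "match"),
]

rates = error_rates_by_group(results)
print(rates)                    # {'darker-skinned': 0.33..., 'lighter-skinned': 0.0}
print(flag_disparities(rates))  # ['darker-skinned']
```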
Involve Communities in Design
Third, Benjamin explains that technologies shouldn’t be developed in isolation from the communities they affect. She advocates for participatory design—an approach that directly involves the people who will be impacted by a technology in its creation process. This looks like:
- Consulting community members at the earliest stages to determine whether a technology is even needed
- Compensating community members for their expertise
- Giving communities real decision-making power throughout development
- Evaluating success based on community-defined metrics
For example, consider predictive policing software that directs police to certain neighborhoods based on historical crime data, without those neighborhoods’ input. A participatory alternative to predictive policing would begin by asking residents about their safety priorities. This could lead to entirely different technologies—like community-run emergency networks that connect people with mental health professionals, or digital platforms for coordinating neighborhood watch programs, mutual aid networks, and restorative justice initiatives. These alternatives would address real community needs (safety, support, and connection) rather than impose surveillance- and punishment-focused solutions.
Establish Regulatory Frameworks
Finally, while Benjamin focuses primarily on how technology is designed and developed, she also notes the importance of external oversight. Effective regulatory frameworks might include:
- Legal remedies for those harmed by algorithmic discrimination
- Transparency requirements that make automated decision-making processes understandable to those affected
- Limits on the use of certain technologies in high-risk contexts, such as criminal justice or housing
These external guardrails create accountability when internal processes fail to prevent racist impacts. For example, people denied housing based on algorithmic assessments should have the right to understand the factors that influenced that decision and to challenge potentially discriminatory outcomes. This contrasts with the current black-box nature of many algorithmic systems, where those affected have no way to understand or contest decisions that impact their lives.