PDF Summary: Race After Technology, by Ruha Benjamin
Book Summary: Learn the key points in minutes.
Below is a preview of the Shortform book summary of Race After Technology by Ruha Benjamin. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of Race After Technology
In Race After Technology, Princeton professor Ruha Benjamin reveals how racism becomes embedded in the technologies that increasingly shape our lives—from facial recognition and predictive policing to healthcare algorithms and hiring software. Benjamin introduces “the New Jim Code” to explain how seemingly objective digital systems amplify historical patterns of discrimination while hiding behind a veneer of scientific neutrality.
Our guide unpacks Benjamin’s discussion of how racism has evolved over time, how it manifests as the New Jim Code today, and her strategies for countering technology-mediated racism. In our commentary, we explore additional perspectives on Benjamin’s insights about the New Jim Code—connecting them to the art world’s treatment of Black subjects, Indigenous data sovereignty movements, science fiction’s warnings about unaudited technology, and philosopher Hannah Arendt’s concept of “thoughtlessness.”
Biased Architecture: When Prejudice Is Built Into the System
Biased architecture—what Benjamin calls “engineered inequity”—is created when technology reinforces social biases by learning from flawed data. If an algorithm is trained on historical data that reflects societal prejudices—like the hiring records of companies that rarely hired minorities—it will reproduce those patterns of discrimination. This becomes especially problematic when these algorithms make important decisions about people’s lives, determining who gets job interviews, loan approvals, or shorter prison sentences. The discrimination gets built right into systems that seem objective but actually perpetuate inequality.
An example of this is how résumé-screening algorithms trained on historical hiring data often disadvantage Black applicants. Studies show that applicants with white-sounding names receive more callbacks than applicants with identical résumés and Black-sounding names. When these human biases become encoded in automated hiring systems that screen résumés based on patterns from past hiring data, the discrimination becomes systematized and hidden behind seemingly objective technical processes. Companies using such algorithms may believe they’re making “data-driven” hiring decisions when, in reality, they’re perpetuating historical patterns of exclusion rather than evaluating candidates on their actual qualifications.
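To make this mechanism concrete, here is a minimal sketch in Python (our illustration, not an example from Benjamin’s book) of how a screening model trained on past hiring decisions absorbs the bias baked into those decisions. The tiny dataset and the “name_sounds_white” feature are hypothetical stand-ins for any proxy correlated with race.

```python
# Minimal sketch (hypothetical data) of how a screening model trained on
# historically biased hiring decisions reproduces that bias.
from sklearn.linear_model import LogisticRegression

# Synthetic training records: [years_experience, name_sounds_white]
# Labels come from past human decisions that favored white-sounding names.
X_train = [
    [5, 1], [3, 1], [2, 1], [4, 1],   # white-sounding names, mostly hired
    [5, 0], [3, 0], [2, 0], [4, 0],   # identical experience, mostly rejected
]
y_train = [1, 1, 0, 1,
           0, 1, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two candidates with identical qualifications, differing only in the proxy:
print(model.predict_proba([[4, 1]])[0][1])  # higher "hire" score
print(model.predict_proba([[4, 0]])[0][1])  # lower "hire" score
```

The model never sees race directly, yet it learns to reward the proxy because the proxy predicted past human decisions, which is exactly the pattern Benjamin describes.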
Biased Architecture and the Complexity of Identity
Benjamin’s concept of biased architecture is relevant beyond race, extending to other dimensions of identity. For example, facial recognition technology is biased against transgender, non-binary, and gender non-conforming people.
Researcher Os Keyes analyzed 30 years of research on automatic gender recognition (AGR) systems and found that these technologies were built upon fundamentally binary conceptions of gender that see gender as immutable and purely physiological. Because facial recognition systems are trained on datasets and models that assume gender is binary, fixed, and determinable from physical features alone, the resulting technology systematically misgenders transgender and non-binary people.
The consequences extend beyond misidentification: As facial recognition becomes integrated into security systems, travel, and identity verification, people whose appearances don’t conform to algorithmic expectations face increasing barriers. For instance, transgender travelers often experience invasive screenings at TSA checkpoints. This illustrates Benjamin’s broader point that ostensibly neutral technological systems can perpetuate existing social hierarchies and create new forms of discrimination.
Invisible Exclusion: When “Neutral” Design Leaves People Out
Invisible exclusion—or “default discrimination,” in Benjamin’s terms—occurs when technology is designed with only the dominant group in mind. When developers (often white men) create and test products primarily for and on people like themselves, they overlook how the technology might work differently for others. Even without intending harm, these “universal” designs inevitably function better for some people than others.
Benjamin notes this happens in many forms of technology we use every day: Some public restrooms have automatic soap dispensers that don’t recognize darker skin tones, and search engines return images of white people for queries like “professional hairstyles.” Sometimes the consequences are more serious, as when medical devices are calibrated for lighter skin: Pulse oximeters, for example, give less accurate readings on darker skin and can miss dangerous drops in Black patients’ oxygen levels.
How Our Idea of “Normal” Is Anything But Neutral
Benjamin’s concept of invisible exclusion connects to a broader pattern in which dominant cultural norms become physically embedded in our environment, creating what Jon Stewart called “racial wallpaper”: pervasive symbols and structures that privileged groups often don’t notice but that constantly communicate to marginalized groups that they don’t fully belong. Rebecca Solnit (Men Explain Things to Me) describes this as making people “feel important or disposable depending on who you are,” pointing, as an illustration, to the way Confederate monuments communicate different messages to different viewers. In the same way, tech design choices encode differing messages about who matters.
This environmental encoding of exclusion operates through what sociologists call “unmarked categories”: characteristics that are so normalized they become invisible to those who have them. Whiteness, maleness, and able-bodiedness function as unmarked defaults, and those who fall outside these defaults must constantly navigate environments not built with them in mind, experiencing what disability scholars call “misfit” between their bodies and their surroundings.
What makes invisible exclusion particularly insidious is precisely its invisibility to those not affected by it. A white developer might never notice that their facial recognition algorithm fails on darker skin, just as many white Americans might walk past Confederate monuments without considering how these symbols affect their Black neighbors. Both represent what philosopher Charles Mills calls “epistemologies of ignorance”: a state of not-knowing that allows privileged groups to maintain an illusion that the systems we live under are neutral when in reality, they’re far from it.
Disguised Control: When Surveillance Masquerades as Help
Disguised control—or, as Benjamin puts it, “technological benevolence”—describes how technologies that monitor, control, and restrict are often presented as helpful, progressive innovations. This framing makes it difficult to criticize potentially harmful systems because they come wrapped in the language of assistance and improvement.
Electronic benefits transfer (EBT) cards, for example, exemplify disguised control. EBT cards, used to distribute welfare benefits like food assistance, are promoted as efficient and modern alternatives to paper vouchers or checks. However, they also function as a tool of surveillance and control—governments can track recipients’ purchases, restrict what they can buy, and suspend benefits with little transparency or recourse. People who pay for food with credit cards or cash aren’t subject to the same scrutiny.
Disguised Control and the Digital Poorhouse
Benjamin’s analysis of “disguised control” squares with Virginia Eubanks’ discussion of the “digital poorhouse,” a modern technological infrastructure that subjects poor and working-class people to heightened surveillance, automated decision-making, and punitive oversight under the guise of efficiency and modernization. Eubanks writes in Automating Inequality that just as 19th-century poorhouses physically contained and stigmatized those in poverty, today’s digital systems track, monitor, and control marginalized communities through technological means.
When welfare systems implement biometric identity verification or track every purchase made with EBT cards, they extend America’s long tradition of treating poverty as a moral failing that requires correction through surveillance and control. Complex verification systems and digital interfaces also create significant technical barriers to accessing essential services: Eubanks documents how, after Indiana switched to automated eligibility systems for public assistance, the state denied more than a million applications in less than three years.
Perhaps most troubling is how these technologies scale. Unlike physical poorhouses, digital systems can expand rapidly with minimal cost, potentially affecting millions of lives before their impacts are fully understood. These systems don’t merely digitize existing processes: They transform the relationship between citizens and the state, often in ways that extend rather than ameliorate existing inequalities.
How Can We Challenge the New Jim Code?
Benjamin doesn’t just diagnose the problem of racism in technology. She also offers a path forward to a future where we design technologies that challenge rather than reinforce racial hierarchies. She recommends two ideological shifts—taking a race-conscious approach to technology and shifting from reform to abolition—as well as some practical strategies for changing how technology is developed. Let’s explore each of her recommendations.
Take a Race-Conscious Approach to Technology
Benjamin argues against “colorblind” approaches to technology that dominate the tech industry today. These approaches typically either treat race as irrelevant to technology development or focus narrowly on hiring a few people of color without changing how technologies are designed. Many companies believe that simply adding one or two Black or Brown faces to their teams “solves” technological racism, but they continue to build products that ignore how race shapes users’ experiences.
Meanwhile, developers often operate under the false assumption that they can build neutral tools in a biased world—that if they simply ignore race in their design process, their technologies will work equally well for everyone. The result is technologies that inevitably reproduce and sometimes amplify existing inequalities.
Instead, Benjamin advocates for race-conscious design: deliberately considering how racism operates and actively working to create technologies that challenge rather than reinforce racial hierarchies. Race-conscious design involves:
- Acknowledging how racism shapes data, institutions, and social contexts
- Examining how technologies might impact different racial groups before deployment
- Including diverse perspectives throughout the design process, not just as an afterthought
- Building tools specifically aimed at exposing and challenging racial inequities
- Implementing structural changes to power dynamics, including regulatory frameworks, community oversight mechanisms, and legal remedies that allow people to seek redress for algorithmic harm
Benjamin highlights the Algorithmic Justice League (AJL) as an example of an organization that embodies race-conscious design principles. Founded by Joy Buolamwini after her research uncovered significant racial and gender bias in facial recognition systems, the AJL works to highlight and address algorithmic bias through research, advocacy, and art. The organization created the Safe Face Pledge, a commitment companies could take to prohibit the use of facial recognition in weaponry or lethal systems and to increase transparency about how facial recognition is used, particularly in policing.
(Shortform note: The Safe Face Pledge initiative was later sunset when it became clear that self-regulation was insufficient, as major tech companies refused to sign on despite being given a clear path to mitigate harms. This outcome demonstrates Benjamin’s point that voluntary corporate commitments alone are inadequate—addressing technological racism requires changes to the fundamental power structures of the tech industry, not just technical fixes or individual goodwill.)
The Limits of Colorblindness
Benjamin’s call for race-conscious approaches to technology challenges one of the most persistent myths in American society: that the best way to achieve racial equity is to ignore race entirely. This is the “colorblind” ideology, which the authors of The Myth of Racial Colorblindness define as the belief that skin color doesn’t play a role in interactions between people or in the policies that shape our institutions. It may seem appealing, but research shows it often perpetuates rather than reduces racial inequities. This is because colorblindness allows us to ignore the ways that well-meaning people participate in practices that reproduce segregation, disadvantage minorities, and restrict opportunities.
As a result of this ignorance, sociologists note, colorblind ideology leaves people without adequate language to discuss race and examine their own biases. By contrast, race-conscious design gives us the language and frameworks to identify how racism operates in society. The alternative to colorblindness isn’t division—it’s a more honest and effective path to equity, one that recognizes that we can’t solve problems we refuse to see.
Thus, instead of taking a colorblind approach to overcoming racism, tech companies might find success with a framework called targeted universalism. In targeted universalism, you set universal goals but account for people’s different starting points instead of treating everyone the same regardless of their circumstances (which reproduces inequality). For example, a facial recognition system might have the universal goal of accurately identifying all users, but would need targeted approaches in its development and testing to ensure it works equally well for people with darker skin tones.
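As a rough, hypothetical sketch of that idea (ours, not drawn from Benjamin or the targeted universalism literature), the snippet below applies a single universal accuracy goal to per-group test results; any group that falls short would trigger targeted work, such as collecting more representative training data, before deployment. All numbers and labels are invented.

```python
# Sketch of a targeted-universalism check: one universal goal, evaluated
# separately for each group. All numbers and group labels are hypothetical.
UNIVERSAL_GOAL = 0.95  # assumed accuracy target that applies to every group

# (correct identifications, total test images) per skin-tone group
results = {
    "lighter skin tones": (970, 1000),
    "darker skin tones": (840, 1000),
}

for group, (correct, total) in results.items():
    accuracy = correct / total
    status = "meets goal" if accuracy >= UNIVERSAL_GOAL else "needs targeted work"
    print(f"{group}: {accuracy:.1%} accuracy -> {status}")
```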
Shift From a Focus on Reform to Demands for Abolition
Benjamin distinguishes between reformist approaches that try to make biased technologies slightly less discriminatory and abolitionist approaches that question whether certain technologies should exist at all in their current forms. She argues that some technologies, like predictive policing and digital surveillance, are so fundamentally embedded in systems of racial control that they cannot be reformed. Instead, they must be abolished and replaced with alternatives that center justice and community wellbeing.
What Is Abolition, and Why Do Advocates Argue It’s the Way Forward?
The concept of abolition has deep historical roots that inform Benjamin’s technological framework. It’s most famously associated with the 19th-century movement to end slavery—rather than simply seeking reforms to make slavery more humane, historical abolitionists recognized that the entire system needed to be dismantled and replaced with systems that affirmed Black people’s human dignity and collective wellbeing. But abolition has always been about more than simply dismantling harmful systems—it has simultaneously demanded the creation of liberatory alternatives, and contemporary abolitionists apply this understanding to modern systems of control like policing, prisons, and surveillance.
As Critical Resistance, a modern abolitionist organization, explains, abolition is “about undoing the society we live in because the [prison industrial complex] both feeds on and maintains oppression and inequalities through punishment, violence, and controls millions of people.” Critical Resistance advocates replacing the prison industrial complex with community-based systems of care, accountability, and support that address the root causes of harm—such as poverty, racism, and lack of access to education, housing, and healthcare—rather than relying on punishment and incarceration.
Benjamin highlights the Appolition app as an example of abolitionist technology that addresses injustice in the bail system. The bail system keeps people in pretrial detention simply because they can’t afford to pay for their freedom. Because marginalized people are less likely to be wealthy, this system disproportionately harms communities of color. While a reformist approach might focus on creating a “fairer” risk assessment algorithm to determine bail amounts—one that appears race-neutral but still relies on factors correlated with race like arrest history or zip code—Appolition takes a fundamentally different path.
The app helps people collectively pool small donations (rounded-up spare change from everyday purchases) that are sent to community bail funds. These organizations use the money to free people from pretrial detention while simultaneously working toward ending the cash bail system entirely. Each time someone bailed out through these funds returns to court without a financial incentive, it undermines the premise that cash bail is necessary in the first place. Through this dual approach—providing immediate relief while building evidence against the system’s necessity—Appolition demonstrates how abolitionist technologies can address both symptoms and root causes simultaneously.
Appolition’s Radical Challenge to Tech-Based “Reform”
Appolition embodies three key abolitionist principles that distinguish it from reformist approaches. First, it provides material aid rather than surveillance, channeling donations to community bail funds that free people from pretrial detention with no strings attached. Second, it creates solidarity across racial and economic lines, allowing users from all backgrounds to contribute to Black liberation while educating them about systemic injustice. Third, and perhaps most importantly, Appolition challenges the very logic of the system it addresses.
While critics might question whether small-scale interventions can meaningfully disrupt the multi-billion-dollar bail industry, this view misunderstands abolition’s dual strategy. For each person freed from pretrial detention, the impact is transformative—preventing the cascade of job loss, housing instability, and forced guilty pleas that detention causes. Simultaneously, each successful return to court without financial incentive accumulates as evidence against the system’s foundational premise. Though the app has a modest financial footprint (raising approximately $200,000 in its first years), it proves that while abolitionist technologies may not bring dramatic overnight transformation, they can contribute to a slow stripping of resources and legitimacy from racist systems.
Make Practical Changes to the Existing Tech Industry
In addition to ideological shifts like a race-conscious approach and an abolitionist framework, Benjamin suggests practical changes we could implement within the existing tech industry to mitigate the racist impacts of technology.
Diversify the Tech Workforce
First, Benjamin argues that who builds technology matters. The homogeneity of the tech workforce—which consists mainly of white and Asian men—contributes to blind spots in design and development. As we’ve discussed, many companies try to address this via superficial diversity initiatives, where they hire a few people from underrepresented groups. However, Benjamin argues these initiatives are inadequate because they place the burden of “fixing” bias on minority employees without changing underlying power dynamics. For example, a Black engineer might flag a potentially discriminatory feature, but if they don’t have the authority to change the project’s course, their insight will remain unheeded.
In contrast, Benjamin says meaningful diversity entails creating conditions where everyone can shape the technologies being built. This requires:
- Increasing representation at all levels, especially in leadership and decision-making positions
- Creating inclusive environments where diverse perspectives are valued and can influence product decisions
- Addressing structural barriers to entry and advancement
- Compensating people for expertise drawn from lived experience, not just technical credentials
The Politics of Diversity
When Benjamin discusses the diversity of the tech workforce, she’s highlighting a stark reality: Women hold 27% of computing roles in the US, but only 3% of these positions are held by Black women and just 2% by Latina women. Women of color also report a significantly lower sense of belonging compared to their peers. Researchers say little progress has been made despite substantial attention: While the tech sector has slowly diversified, it’s doing so at a slower pace than the rest of the American economy. When diversity does increase, it’s often concentrated at the executive level, suggesting companies may be responding to public pressure with token appointments rather than systemic change.
Research from Harvard Business School reveals that tokenization—being the only woman or person of color on a team—creates psychological burdens that hurt performance. This suggests that Benjamin’s emphasis on meaningful diversity, with representation at all levels and inclusive environments where diverse perspectives can actually influence decisions, remains essential for both justice and innovation. The question is whether tech companies will prioritize these values in an increasingly hostile political environment.
Since the 2023 Supreme Court decision striking down race-conscious college admissions, a coordinated backlash against diversity initiatives has swept through corporate America. Companies that once touted their commitment to diversity are now pursuing these efforts “under the radar” or abandoning them altogether. In 2023-2024, organizations dedicated to expanding opportunities for underrepresented groups in tech were forced to close due to funding cuts. Some diversity advocates have even begun rebranding their initiatives to avoid scrutiny, replacing terms like “women in tech” with language about “leadership development.”
These developments underscore how quickly progress toward meaningful diversity in tech can be reversed when political pressures mount.
Audit Technologies for Bias
Second, Benjamin advocates for rigorous testing and auditing of technologies for discriminatory impacts before deployment. Effective auditing includes:
- Testing with diverse populations and in diverse contexts
- Examining training data for historical biases
- Analyzing outcomes across different demographic groups
- Continuous monitoring for unexpected discriminatory effects
She emphasizes that these audits must have teeth: They must lead to substantive changes when bias is found. For example, when facial recognition technology shows higher error rates for darker-skinned faces, a superficial adjustment might be to simply add a disclaimer about potential inaccuracy. A substantive change would be redesigning the system with more diverse training data, or even delaying deployment until acceptable accuracy is achieved across all demographics. The difference lies in who bears the burden: the technology’s makers, who must fix the flawed system, or the people it misidentifies, who must live with its failures.
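Here is one hedged sketch of what an outcome audit along these lines could look like in code; it illustrates the general idea rather than a procedure Benjamin prescribes. It compares a screening system’s selection rates across demographic groups (all counts hypothetical) and blocks deployment when the disparity is too large, instead of shipping with a disclaimer.

```python
# Illustrative outcome audit: compare selection rates across groups and
# require substantive changes (not a disclaimer) when disparity is too high.
# Threshold loosely echoes the EEOC four-fifths rule; all counts are invented.
MIN_DISPARITY_RATIO = 0.8  # lowest group rate must be >= 80% of the highest

def audit_selection_rates(selected_by_group: dict[str, tuple[int, int]]) -> bool:
    """Return True only if selection rates are acceptably close across groups."""
    rates = {g: sel / total for g, (sel, total) in selected_by_group.items()}
    for group, rate in rates.items():
        print(f"{group}: selected {rate:.1%} of applicants")
    ratio = min(rates.values()) / max(rates.values())
    print(f"disparity ratio: {ratio:.2f} (minimum allowed: {MIN_DISPARITY_RATIO})")
    return ratio >= MIN_DISPARITY_RATIO

# Hypothetical monitoring data from a résumé-screening system
if not audit_selection_rates({"white applicants": (300, 1000),
                              "Black applicants": (150, 1000)}):
    print("Audit failed: redesign, retrain, or delay deployment.")
```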
Technology Audits in Science Fiction and Reality
Benjamin’s call for technology audits that lead to real change echoes a powerful theme in science fiction: the profound consequences of technological systems designed without anticipating their full social impacts. For example, it recalls the film adaptation of Philip K. Dick’s story “The Minority Report,” in which a seemingly objective “precrime” system predicts future criminal behavior based on historical patterns. The predictive technology appears infallible until its hidden biases and fundamental flaws are exposed, which (spoiler alert) leads to the dismantling of the precrime program. This is an example of the kinds of substantive changes Benjamin advocates.
Another example of substantive changes being made post-audit comes from real life: Amazon developed an AI recruiting tool, found that it discriminated against women, and jettisoned it. The tool, which was trained on ten years of predominantly male résumés in the tech industry, taught itself that male candidates were preferable. It even penalized résumés containing terms like “women’s” and downgraded graduates from women’s colleges. When engineers attempted to neutralize these specific biases, they realized the system could still develop other discriminatory sorting methods they couldn’t anticipate. Rather than simply adding disclaimers or making superficial adjustments, Amazon made the substantive decision to disband the project entirely.
These examples illustrate Benjamin’s point that meaningful auditing isn’t about identifying bias and continuing business as usual with a disclaimer. It’s about creating accountability that forces technological systems to adapt to human diversity rather than forcing vulnerable populations to adapt to technological limitations. Without this approach, even well-intentioned technologies will perpetuate inequality, turning dystopian fiction into lived reality for marginalized communities.
Involve Communities in Design
Third, Benjamin explains that technologies shouldn’t be developed in isolation from the communities they affect. She advocates for participatory design—an approach that directly involves the people who will be impacted by a technology in its creation process. This looks like:
- Consulting community members at the earliest stages to determine whether a technology is even needed
- Compensating community members for their expertise
- Giving communities real decision-making power throughout development
- Evaluating success based on community-defined metrics
For example, consider predictive policing software that directs police to certain neighborhoods based on historical crime data, without those neighborhoods’ input. A participatory alternative to predictive policing would begin by asking residents about their safety priorities. This could lead to entirely different technologies—like community-run emergency networks that connect people with mental health professionals, or digital platforms for coordinating neighborhood watch programs, mutual aid networks, and restorative justice initiatives. These alternatives would address real community needs (safety, support, and connection) rather than impose surveillance- and punishment-focused solutions.
Indigenous Data Sovereignty as a Model for Technological Justice
The CARE Principles (Collective Benefit, Authority to Control, Responsibility, Ethics) offer a powerful real-world example of what Benjamin’s participatory design looks like in practice. Developed by Indigenous data sovereignty movements worldwide, these principles show how communities traditionally excluded from technology design can create alternative frameworks that center their needs and values. Unlike conventional approaches that merely invite community “input” while keeping decision-making power with technologists, the CARE Principles redistribute control. When communities design technologies for themselves, they often create systems that look nothing like what outside experts would design for them.
Take the contrast between mainstream data frameworks (like FAIR: Findable, Accessible, Interoperable, Reusable) and the Indigenous CARE approach. Where FAIR focuses on technical accessibility, CARE centers community well-being and self-determination. This shift isn’t just semantic: It reflects fundamentally different values about who technology should serve and how success should be measured.
This illustrates precisely what Benjamin means by participatory design: When communities have genuine decision-making power throughout the technology development process, they don’t just tweak existing technologies—they reimagine them entirely. For example, while conventional approaches like FAIR might focus on making data more “open,” Indigenous communities prioritize protocols for data use that respect cultural knowledge and collective ownership.
Establish Regulatory Frameworks
Finally, while Benjamin focuses primarily on how technology is designed and developed, she also notes the importance of external oversight. Effective regulatory frameworks might include:
- Legal remedies for those harmed by algorithmic discrimination
- Transparency requirements that make automated decision-making processes understandable to those affected
- Limits on the use of certain technologies in high-risk contexts, such as criminal justice or housing
These external guardrails create accountability when internal processes fail to prevent racist impacts. For example, people denied housing based on algorithmic assessments should have the right to understand the factors that influenced that decision and to challenge potentially discriminatory outcomes. This contrasts with the current black-box nature of many algorithmic systems, where those affected have no way to understand or contest decisions that impact their lives.
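As a loose illustration of what that kind of transparency could look like, the sketch below breaks a simple linear tenant-scoring decision into per-factor contributions that could be disclosed to the applicant. This is a hypothetical toy model, not a real screening tool; every weight, factor name, and threshold is invented.

```python
# Hypothetical linear tenant-screening model, used only to illustrate
# factor-level transparency. All weights, factors, and values are invented.
weights = {
    "credit_score": 0.004,
    "prior_eviction_filings": -1.5,
    "income_to_rent_ratio": 0.8,
}
applicant = {"credit_score": 640, "prior_eviction_filings": 1, "income_to_rent_ratio": 2.5}
APPROVAL_THRESHOLD = 3.5  # assumed cutoff

# Per-factor contributions are what a transparency requirement might
# oblige the decision-maker to disclose to the applicant.
contributions = {factor: weights[factor] * applicant[factor] for factor in weights}
score = sum(contributions.values())

print(f"decision: {'approved' if score >= APPROVAL_THRESHOLD else 'denied'} (score {score:.2f})")
for factor, value in sorted(contributions.items(), key=lambda item: item[1]):
    print(f"  {factor}: {value:+.2f}")
```

In this hypothetical case, the disclosed breakdown would show the applicant that a single eviction filing, not income or credit, tipped the decision to a denial, giving them something concrete to contest if the record were wrong.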
The Banality of Technological Evil
Benjamin’s call for regulatory frameworks to govern technology recalls philosopher Hannah Arendt’s warnings about “thoughtlessness” in Eichmann in Jerusalem, her analysis of Nazi bureaucrat Adolf Eichmann. As the administrator who organized the transportation of millions to Nazi concentration camps, Eichmann was responsible for mass murder on an industrial scale. Yet Arendt was struck not by Eichmann’s monstrosity but by his ordinariness—what she called “the banality of evil.” The evil Eichmann facilitated wasn’t rooted in malevolent intent but in his “inability to think”—that is, to question the system within which he operated or to consider the moral implications of his actions.
Today’s algorithmic systems present a similar peril. Like Eichmann, who hid behind “officialese” and bureaucratic language that sanitized atrocities, today’s technological systems employ neutral-sounding terms like “risk assessment” and “predictive modeling” that obscure their discriminatory impacts. The engineers and executives who create these systems often exhibit what Arendt would recognize as thoughtlessness—not stupidity, but a failure to reflect critically on the consequences of their work.
Regulatory frameworks are essential because they can institutionalize what Arendt called “the habit of examining and reflecting upon whatever happens to come to pass.” When companies are required to document algorithmic impacts, provide transparency, and offer redress for harm, they are forced to engage in the thinking that might otherwise be bypassed in the rush toward efficiency and profit. The alternative—allowing technologies to operate as “black boxes” with no accountability—creates precisely the conditions in which technological thoughtlessness flourishes and human suffering becomes invisible behind the screen of technical complexity.