PDF Summary: The Technological Republic, by Alexander C. Karp and Nicholas W. Zamiska

Book Summary: Learn the key points in minutes.

Below is a preview of the Shortform book summary of The Technological Republic by Alexander C. Karp and Nicholas W. Zamiska. Read the full comprehensive summary at Shortform.

1-Page PDF Summary of The Technological Republic

Silicon Valley’s brightest engineers optimize ads and build food delivery apps while America’s rivals race ahead in military AI—the technology that will determine 21st-century dominance. In The Technological Republic, Palantir executives Alexander C. Karp and Nicholas W. Zamiska argue that the US tech industry is wasting its enormous talent on consumer products instead of on threats facing the nation. Drawing on their experience building defense technology, the authors contend that the US must reunite Silicon Valley with the Pentagon, revive its sense of national purpose, and launch a “new Manhattan Project” to lead AI development.

In this guide, we’ll unpack Karp and Zamiska’s argument: what they contend the US has lost, how cultural shifts have severed the tech-government partnership, and what they say must change. We’ll also examine competing visions of technology’s purpose, what Palantir’s work actually entails, whether the AI arms race is as inevitable as they claim, and what their vision reveals about Silicon Valley’s definition of public service.

(continued)...

Ultimately, since neither of you can trust that the other will refrain from developing it, you both have to assume the other will, and thus each of you has to develop it too. This means that if everyone believes an AI arms race is unavoidable, it becomes unavoidable, even if cooperation through mutual restraint would be safer.

The only way a prisoner’s dilemma can be resolved with maximum benefit to all parties is if all parties trust each other not to act in their own individual self-interest. In other words, cooperation becomes a rational choice only if you can verify the other side isn’t secretly working against you.
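The arms-race logic above is a standard prisoner's dilemma, which can be sketched with a small payoff table. The numbers below are illustrative, not from the book; they are chosen only to encode the ordering under which developing military AI strictly dominates restraint:

```python
# Illustrative prisoner's-dilemma payoffs for two rival states.
# Strategies: "restrain" (forgo military AI) or "develop".
# payoffs[(a_move, b_move)] = (payoff to A, payoff to B); the numbers
# are made up, chosen so that developing strictly dominates restraining.
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: safest joint outcome
    ("restrain", "develop"):  (0, 5),  # A restrains, B gains a decisive edge
    ("develop",  "restrain"): (5, 0),
    ("develop",  "develop"):  (1, 1),  # costly arms race
}

def best_response(opponent_move):
    """A's best move given B's move, under these payoffs."""
    return max(["restrain", "develop"],
               key=lambda mine: payoffs[(mine, opponent_move)][0])

# Whatever B does, A does better by developing (and vice versa by
# symmetry), so (develop, develop) is the equilibrium even though
# mutual restraint pays more for both sides.
assert best_response("restrain") == "develop"
assert best_response("develop") == "develop"
```

This is why, absent verifiable trust, both sides end up in the worse joint outcome: each side's individually rational move leads away from the cooperative one.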

In the case of AI competition, this verification may actually be possible. Unlike some military technologies that can be developed in secret, AI projects require visible infrastructure: massive data centers, enormous energy consumption, and specialized chip manufacturing. The visibility of these activities would make it possible to verify that countries are honoring agreements not to develop military AI.

What Needs to Change?

Karp and Zamiska argue that the US must restore what they call a “technological republic,” a tradition they trace to the nation’s founding, when leaders like Benjamin Franklin and Thomas Jefferson were themselves scientists and engineers. They envision an American society where the tech industry reunites with government to serve national purposes, like defense and intelligence, where engineers and business leaders have collective goals, and where the pursuit of overwhelming AI superiority becomes a shared priority.

(Shortform note: The founders understood science as a tool for solving problems for the public good and enabling democratic governance, as when John Adams based the nation’s checks and balances system, which ensures policy emerges from evidence, on scientific principles. Scholars contend that this democratic character isn’t incidental, but is what makes science work. Science’s authority comes from the collaborative processes and norms that structure how scientists vet competing ideas. Karp and Zamiska root the idea of a technological republic in this tradition, but emphasize a slightly different focus: the utility of science to build geopolitical superiority, rather than to solve domestic challenges.)

In this section, we’ll explain the three conditions the authors believe are necessary to remake the US as a technological republic: the tech industry reuniting with the defense and intelligence establishment, Americans recovering collective belief in national purpose and Western values, and the US government pursuing AI dominance through massive investment.

The Tech Industry Must Reunite With Government

First, Karp and Zamiska advocate for the tech industry to reunite with the government. They believe tech companies must prioritize working on defense, intelligence, and other “public goods.” One reason to rebuild this partnership is that markets don’t naturally prioritize what society needs most urgently. So if companies continue building whatever generates the highest returns, they won’t tackle the hard problems in national security and law enforcement that the government is working on. The authors also argue that the industry has a responsibility to support the state that enabled its success, since the American system of universities, capital markets, legal protections, and infrastructure made Silicon Valley possible.

(Shortform note: Economists explain that public goods like national defense create challenges for democratic accountability. Because defense is funded through taxation, citizens can’t signal how much they want to spend or what priorities matter most to them. Instead, defense spending is determined through interactions between politicians, bureaucrats, and special interest groups. A Pentagon program that sends military officers to work for defense contractors for a year shows this dynamic: Officers return with recommendations that benefit those companies. Additionally, defense contractors give more campaign contributions to members of Congress who vote for larger military budgets, whether or not their constituents support these increases.)

Karp and Zamiska write that the tech industry should work with the government because only the government can coordinate technological development at the scale required to maintain military superiority and AI dominance. They argue tech companies should view the government as a collaborator rather than an obstacle, and that engineers should see working on national security challenges as honorable rather than morally suspect. The industry should also accept that some of its most talented people should be working on problems the government identifies as priorities—autonomous weapons systems, surveillance technologies, battlefield AI—even if those applications make some people uncomfortable.

How Palantir’s Technology Works—and the Questions It Raises

Palantir’s involvement in US immigration enforcement demonstrates its commitment to serving the government’s needs. The company’s platforms integrate data from disconnected government systems, creating interfaces that help agencies make faster decisions. For US Immigration and Customs Enforcement (ICE), this means combining Social Security records, IRS data, border crossings, and student visas into searchable profiles that agents can use to track people and coordinate operations. Palantir’s ELITE system then assigns “confidence scores” about where people might be located and identifies areas where multiple targets are concentrated.

But what agents do with that data is up to ICE: When ICE’s operations using Palantir systems attract criticism, the company notes it only builds the tools, while the government makes the rules on how they’re used. This points to questions about accountability: If Congress passes laws, ICE interprets them, and contractors build the tools ICE requests, who’s responsible for the outcome?

Steven Hubbard of the American Immigration Council contends that defense contractors’ claims to be mere tool-makers don’t hold up to scrutiny, arguing that the design choices they make constitute policymaking: Palantir’s decisions about what data to integrate, whom to flag, and what conditions prompt an alert shape how ICE uses its tools. These debates resist easy answers and speak to an ongoing controversy over who is ultimately responsible for such decisions.

America Must Recover Its Belief in National Purpose

Second, Karp and Zamiska argue the US must reject cultural relativism—the idea that no culture or set of values is better than another—and recover its conviction about national identity and the superiority of Western civilization. They see this shift as necessary because the intellectual movements that dismantled university courses on Western civilization—and taught that Western narratives of democracy and progress were just exercises of power—trained students to be suspicious of claims about values or purpose. This created a reluctance to commit to any belief that might offend someone or to any course of action that might limit someone else’s choices; anyone taking a strong position now risks being accused of cultural imperialism or insensitivity.

The authors argue this intellectual stance has become destructive because it prevents societies from articulating any shared purpose or defending any particular way of life. If no values are better than others, then there’s no basis for saying democracy is preferable to authoritarianism, or that individual rights matter more than state control. People educated in this framework have no way to explain why the tech industry should work on collective challenges rather than just pursuing profit. Karp and Zamiska contend that this leaves much of Silicon Valley knowing that they oppose certain things (like building technology for the military), but without the ability to articulate what they support or believe America should stand for.

The Paradox of Relativism and Fascism

Cultural relativism emerged from anthropology as a tool for understanding cultures on their own terms before making judgments. But in political discourse, “relativism” has come to mean the embracing of subjectivism, historicism, skepticism, materialism, or nihilism—essentially, whatever intellectual tendencies a speaker opposes. In other words, many political thinkers use “relativism” less as a label for a precise philosophical position than as a diagnosis of cultural malaise.

By arguing that cultural relativism prevents people from committing to any set of values, Karp and Zamiska draw on a specific intellectual tradition: the idea that the West lost its moorings when it stopped believing in “transcendence”—the principle that moral truth comes from a source beyond society and carries authority no matter what any culture believes. This idea emerged from conservative analyses of 20th-century totalitarianism. After World War II, conservatives argued that intellectuals who taught that values are culturally constructed left people unable to defend democracy when fascist and communist ideologies arrived.

This narrative has shaped debates about values and identity for decades, but the historical record reveals a wrinkle. Italian fascist Benito Mussolini celebrated relativism’s “contempt for fixed categories” and argued that fascism’s strength lay in its flexibility to change tack as circumstances demanded. But Nazi Germany took the opposite stance: Nazi intellectuals opposed relativism, associating it with Jews, liberalism, and the weakening of German society. To justify genocide as scientifically necessary, Nazi ideology depended on claims of Aryan racial superiority. Both movements committed atrocities and concentrated authoritarian power despite holding opposite positions on whether objective values exist.

Historical precedent thus suggests that neither accepting nor rejecting cultural relativism necessarily guards against the dangers of authoritarianism.

Karp and Zamiska argue it’s crucial that the US recover its collective belief, not just personal conviction, because democracy requires more than individual rights and market economics to function. It needs cultural coherence: shared stories, rituals, and values. People need a purpose to motivate them to tackle difficult challenges—something more than wealth, and more like the national goals that the US’s authoritarian rivals use to mobilize their societies. Karp and Zamiska insist the US must rebuild a shared culture and recover its belief in Western values—and they emphasize democracy, individual rights, and technological progress as core to Western identity.

(Shortform note: Research confirms that a belief in shared values fosters social trust, bolstering Karp and Zamiska’s case that cultural coherence matters for social cohesion. But research suggests that coherence doesn’t require a state authority to impose values from above. Instead, sociologist Erving Goffman found that social order emerges naturally from everyday interactions, and trust develops as people collaboratively work out shared norms. Additionally, research shows that having diverse ideas doesn’t inherently undermine cohesion. What erodes social trust is perceived polarization: the belief that differences of opinion are irreconcilable and divide society into hostile camps.)

The US Must Pursue AI Dominance

Third, Karp and Zamiska call for a “new Manhattan Project” to develop AI systems for defense. They look to the Cold War to explain why we should see this as an urgent priority. During the Cold War, peace rested on mutually assured destruction: No nation would start a war that would end in annihilation for both sides, making parity in nuclear arms capability sufficient for preventing all-out war. But in AI-powered warfare, they argue, the decisive advantage will belong to whoever develops the most sophisticated systems first. Achieving overwhelming superiority could secure American power for generations.

Is Nuclear Deterrence the Right Model for AI Competition?

Many experts agree with Karp and Zamiska that AI could position the US for geopolitical dominance, and they see nuclear deterrence as a model for determining how to proceed in the AI arms race. But where the authors argue the US needs overwhelming AI superiority, others propose different strategies. Some argue for Mutually Assured AI Malfunction (MAIM), a framework where major powers threaten to sabotage each other’s AI projects to prevent any nation from achieving total dominance. This mutual vulnerability creates stability, much as mutually assured destruction functioned during the Cold War.

Others contend that international treaties modeled on Cold War arms control offer the most promising path. During the Cold War, the US and the Soviet Union had “very low trust” yet established treaties limiting nuclear arsenals: Scientists from both sides continued to meet even when their governments were at an impasse, drafting major arms control agreements. Advocates of similar treaties for AI argue that satellites could monitor AI datacenters, and an organization equivalent to the International Atomic Energy Agency could verify compliance with treaties limiting AI development.

Still others argue, however, that AI’s unique characteristics undermine any deterrence framework. Unlike nuclear tests that produce detectable radioactive particles, there’s no clear way to know when a rival achieves advanced AI. AI development happens in commercial companies rather than in secure government efforts like the Manhattan Project. This means military AI capabilities are being developed by private actors with their own goals, in facilities vulnerable to espionage and sabotage. This situation has no nuclear precedent.

Karp and Zamiska argue the tech industry must commit its capabilities to making AI development a national priority. They outline an ambitious vision for a new Manhattan Project: government funding at unprecedented scale for AI research with military applications; collaboration between Silicon Valley and the Department of Defense, intelligence agencies, and the military; and a focus on developing autonomous weapons, targeting systems, drone swarms, and other tech that can make decisions faster than humans.

Would a “Manhattan Project” Actually Produce the AI the Military Needs?

The call for a new Manhattan Project for AI reflects an assumption among policymakers that artificial general intelligence (AGI)—AI that matches or exceeds human cognitive abilities—will be critical to military dominance. The 2026 National Defense Authorization Act mandates a Pentagon committee to evaluate AI systems that “approach or achieve artificial general intelligence” for defense applications, suggesting that Congress views AGI development as a national security priority. The applications Karp and Zamiska advocate—autonomous weapons, targeting systems, and drone swarms—would likely require AGI-level capabilities to function as envisioned.

However, it’s unclear whether a Manhattan Project-style effort could actually produce AGI. Fully autonomous military systems that make battlefield decisions in real time would need to operate with minimal scaffolding (the specialized tools, context, and human guidance that AI requires to perform tasks) across unpredictable environments—a goal whose feasibility many researchers question.

It’s also unclear if we’d have meaningful ways to measure progress toward AGI. Among AI researchers, there’s no consensus on what AGI means, what form it would take, or how to build it. A 2025 survey found that 76% of AI researchers believe current approaches won’t achieve it. The original Manhattan Project succeeded because it had a clear, measurable goal grounded in known scientific principles: building an atomic bomb through nuclear fission. In contrast, AGI has no agreed-upon definition or measurable endpoint. This suggests that even if massive government funding accelerated current approaches, we would lack reliable ways to determine whether we were actually making progress toward AGI.

How Can We Build a Technological Republic?

Karp and Zamiska offer their experience at Palantir as a model for how to build the technological republic they envision. Their recommendations, which we’ll explore in this section, are intended for several different audiences: private organizations, government agencies, and public institutions. They offer a vision of how the US can put outcomes above bureaucracy, make innovation accessible to the government, and ensure that the people making decisions have stakes in the technological republic they’re building.

Distribute Authority to Those With Direct Information

First, Karp and Zamiska argue that innovative organizations should give decision-making power to the people closest to actual problems, rather than forcing everything to go through management hierarchies. This principle applies to any organization facing complex, rapidly changing challenges, whether they’re tech companies, government agencies, or corporations working in more traditional industries. The authors believe this approach played an important role in Silicon Valley’s early successes, but as companies grew, many of them lost this culture. Government agencies, in particular, desperately need to learn it.

Palantir applies this through what the authors call “shadow hierarchies,” where the formal trappings of hierarchy are absent: no org charts determining who has authority, no corner offices or reserved parking signaling status. Authority still exists, but it isn’t formalized in job titles or reporting structures. This means talented people can claim authority simply by doing the work, rather than first asking permission from clearly defined bosses. While Karp and Zamiska admit this creates some confusion about who’s in charge of what, it enables people to step forward and solve problems. This approach relies on having a founder or clear ultimate authority who can serve as final arbiter when needed.

Palantir also encourages what Karp and Zamiska call “constructive disobedience.” This means employees can challenge how leadership wants to implement strategy (though not the strategy itself). Like musicians in an orchestra who have expertise the conductor lacks, employees absorb the overall direction, but then reshape the business’s tactical approaches to get better results. The authors explain that this contrasts with the emphasis that many traditional corporations place on rewarding compliance, which undermines innovation because people stop thinking critically about whether they’re solving problems in the right ways.

Why Doesn’t Palantir’s “Shadow Hierarchy” Create Chaos?

Palantir has an unorthodox structure, so how does it avoid descending into chaos? The answer lies in understanding organizations as networks—systems of interconnected nodes (people) and links (relationships). A traditional hierarchy resembles a tree, with the CEO as the trunk, divisions as major branches, and information flowing up and down. The opposite is a flat network, where everyone connects to everyone else with equal authority. With 100 workers, that yields nearly 5,000 possible connections.
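The “nearly 5,000” figure is just the number of unordered pairs of people, n(n−1)/2, which grows quadratically with headcount. A quick check:

```python
from math import comb

n = 100  # workers in a fully flat network
links = comb(n, 2)  # every unordered pair of people can connect: n*(n-1)/2
print(links)  # 4950, i.e. "nearly 5,000" possible connections
```

That quadratic growth is why a fully flat structure stops scaling: doubling headcount roughly quadruples the number of relationships to maintain.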

The structure Karp and Zamiska cite resembles something in between: a network where people connect primarily with their immediate collaborators, but the founders provide structure when needed, acting as hubs in the network. Physicist Geoffrey West’s research suggests how such structures can remain stable: Constraint at one level of a network often enables freedom at another. At Palantir, this means that the company’s founders set its strategic direction and choose what projects to take on, but they allow workers to self-organize. In network terms, they set boundary conditions within which local problem-solving can emerge spontaneously.

Claude Shannon’s information theory offers another perspective on why the right amount of structure in such networks matters. Just as innovation and productivity rely on a careful balance between too little and too much structure, so does communication rely on the right amount of uncertainty. It requires at least some uncertainty: If you already know what I’m going to say, I’m not transmitting information. But too much uncertainty (as when there’s no structure at all) creates noise through which people can’t coordinate. Palantir’s structure works because it provides an optimal level of uncertainty: Teams have enough autonomy to discover solutions that central planning can’t, but the founders can step in to resolve uncertainty when needed.
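Shannon’s point can be made concrete with his entropy formula, H = −Σ p·log₂(p), which quantifies uncertainty in bits: a fully predictable message carries zero information, while maximal uncertainty maximizes it. A minimal sketch (the example distributions are illustrative, not from the text):

```python
from math import log2

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return sum(-p * log2(p) for p in probs if p > 0)

# If you already know what I'll say, no information is transmitted.
print(entropy([1.0]))  # 0.0 bits

# Eight equally likely messages: maximum uncertainty, 3 bits per symbol --
# pure "noise" from a coordination standpoint.
print(entropy([1/8] * 8))  # 3.0 bits

# Useful communication sits in between: some structure, some surprise.
print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits
```

On this analogy, an organization with total predictability transmits nothing new, while one with no structure at all produces only noise; the productive range lies between the extremes.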

Require Government to Buy Commercial Technology When It Exists

Karp and Zamiska’s second recommendation for how to build a technological republic targets a specific barrier preventing tech companies from working with the government: Federal agencies default to building custom systems rather than buying proven commercial products. The authors argue that instead, government agencies should be required to evaluate whether commercial solutions already exist before spending years and billions of dollars developing their own.

The current procurement system (and its extensive red tape) emerged from good intentions: In the 1980s, reports that the Navy paid $435 for hammers sparked outrage about wasteful military spending. Congress responded with regulations to prevent the government from overpaying so egregiously. However, the procurement rules became so burdensome that they soon prevented even basic purchases.

For example, during the Gulf War, the Air Force couldn’t buy two-way radios from Motorola because regulations required cost-tracking systems the company didn’t have. A 1994 law, the Federal Acquisition Streamlining Act (FASA), tried to fix this by requiring agencies to buy commercially available products whenever possible, but for years, agencies largely ignored this rule, continuing to favor traditional defense contractors who build custom systems to their specific requirements.

Does Buying Commercial Products Actually Benefit the Government?

Experts have long argued that federal procurement has problems. Studies dating back to a 1987 Defense Science Board report have found that when the government builds custom software, it often fails to work as intended and costs more than buying existing commercial products. This is because commercial companies spread their development costs across many customers, making their products cheaper than one-off government projects. Additionally, when a commercial product fails, the company absorbs that loss—but when government-built projects fail, taxpayers pay for the failures. Yet experts say simply requiring agencies to buy commercial products can create a different set of problems.

Some government needs don’t match what’s available commercially. For example, the Social Security Administration bought a commercial product that worked well in testing but couldn’t handle the agency’s massive transaction volumes when deployed at full scale. The agency ended up building a custom solution. Plus, when only a handful of government agencies need a particular product, there’s little competition to constrain the price. This can create situations where a few companies dominate government sales without the competitive benefits that commercial markets are supposed to provide for the government and for taxpayers.

Karp and Zamiska claim a breakthrough happened when Palantir sued the Army in 2016 for failing to even consider its commercial software for a battlefield intelligence contract. A federal judge ruled the Army had to evaluate whether commercial options existed. Three years later, the Army awarded Palantir the contract—the first time a Silicon Valley software company beat traditional contractors to lead a major defense program. The authors argue that this shows what must change: Government agencies need to actually enforce the 1994 law, and when a proven commercial solution exists, the government should buy it rather than spending years building an inferior version from scratch.

Palantir, Security, and Surveillance

The lawsuit Karp and Zamiska cite as a breakthrough has coincided with spectacular growth for Palantir: The company’s stock rose more than 600% between 2024 and 2025, Karp received $6.8 billion in compensation in 2024, and in 2025 the Army awarded Palantir its largest contract ever, worth close to $10 billion. But critics of the company (and of The Technological Republic) say the self-interest runs deeper than financial gain. What Palantir builds and what it aims to become reveals a vision of concentrated power that many critics find alarming.

Palantir’s stated goal, according to its 2020 SEC filing, is to build a system that becomes “the central operating system for all US defense programs” and constructs a complete model of reality for the US government. To that end, the company is working to integrate data on Americans from the Department of Homeland Security, Defense Department, Health and Human Services, Social Security Administration, and Internal Revenue Service into unified databases. This surveillance infrastructure would create what Republican Representative Warren Davidson called “one ring to rule them all,” which he contends must be destroyed.

Davidson’s reference to J.R.R. Tolkien’s The Lord of the Rings is apt, given that Palantir is named after the “seeing stones” in Tolkien’s trilogy. In a way, Palantir sees itself as providing a “seeing stone” that concentrates knowledge. When the Army was developing battlefield intelligence software, Palantir argued it already possessed the capability not just to analyze battlefield data, but to integrate all data into a unified system. The company’s vision of its product is totalizing: a single platform connecting every disparate source of government information into what it promotes as the ultimate security network, but which the company’s critics allege is an infrastructure of absolute surveillance.

Give Employees Ownership in What They’re Building

Karp and Zamiska’s third recommendation for building a technological republic is that organizations should give employees equity stakes so their personal wealth depends on the company’s long-term success, not just on steady paychecks that arrive regardless of results. Beginning in the 1960s, Silicon Valley pioneered giving equity to all sorts of employees, not just executives—even administrative staff and junior engineers got ownership stakes. This created a culture where everyone participated materially in the company’s success. The authors argue this alignment of interests drove much of Silicon Valley’s growth: When people own part of what they’re building, they think like owners focused on long-term value, rather than like employees just trying to keep their jobs.

(Shortform note: Whether employee equity actually motivates and retains workers may depend on market conditions and career stage. A 2016 study of 135 Silicon Valley engineers found no significant relationship between equity compensation and job satisfaction or long-term retention. More recent data suggests employees’ attitudes toward equity fluctuate with the market: By late 2023, during a tech downturn, employees were choosing to exercise only 23% of their vested stock options, down from 58% two years earlier, and companies had reduced average equity grants by 37%, offering more cash instead. This pattern echoes historical cycles where enthusiasm for equity surges during bull markets but skepticism dominates during downturns.)

Pay Government Officials Competitive Salaries

The fourth recommendation the authors make is that the government should pay senior officials salaries competitive with what talented people earn in the private sector. Karp and Zamiska point to the chairman of the Federal Reserve, who earns roughly $190,000 annually—less than entry-level investment bankers—despite managing decisions that affect trillions of dollars and hundreds of millions of people. This creates two bad outcomes: Either only wealthy people can afford to work in public service, or officials have to cash in later through lucrative corporate board seats, speaking fees, and consulting contracts after leaving government.

Singapore offers an alternative model. Under Lee Kuan Yew, the country tied ministers’ salaries to comparable private sector positions, with senior officials earning over $1 million annually. The authors argue this attracts top talent into government and reduces corruption by making official compensation itself attractive enough that people don’t need side deals or post-government paydays. The logic is similar to giving employees equity: When decision-makers’ personal financial success depends on doing their jobs well—whether through company ownership or competitive government salaries—their interests align with long-term institutional success rather than with personal survival or future enrichment opportunities.

Would Competitive Salaries Attract the Right Talent to Government?

Karp and Zamiska assume that the best and brightest in Silicon Valley would excel in public service if their salaries were competitive. But research on public sector work suggests the skills and motivations that make someone successful in Silicon Valley may differ from those needed in public service. People who choose the public sector often describe their work in terms of making incremental improvements and ensuring that vulnerable people receive support. But studies show that while competitive pay attracts more qualified applicants to the public sector, it may attract people motivated only by compensation, rather than those motivated by the service-oriented values of typical public sector workers. It may also risk importing a tendency to serve corporate interests over the interests of other stakeholders, such as citizens without money or political power.

Additionally, Singapore may not be the ideal model Karp and Zamiska hope for. Research shows that raising salaries can actually increase corruption. The reason Singapore’s high salaries have not done so may be due to the authoritarian system that oversees them, where officials answer to the ruling party, severe penalties make corruption risky, and the government can enforce compliance that democratic accountability mechanisms would struggle to replicate.
