#435 — The Last Invention

By Waking Up with Sam Harris

In this episode of Making Sense, Sam Harris examines the potential impact of advanced artificial intelligence on society and human institutions. The discussion covers Silicon Valley's "accelerationist" movement, which aims to replace human workers and government institutions with AI systems, and explores predictions from AI researchers about the timeline for developing artificial general intelligence (AGI) and artificial superintelligence (ASI).

The episode delves into existential risks associated with advanced AI development, drawing from insights by AI safety experts who suggest that superintelligent AI could sideline humanity not through hostility, but through indifference to human interests. The discussion also addresses proposed solutions, including government regulation of AI development, restrictions on data centers, and economic measures like universal basic income to support workers affected by automation.

This is a Shortform summary of the Oct 2, 2025 episode of Making Sense with Sam Harris.

1-Page Summary

Accelerationist Vision and Concerns About AI Replacing Institutions

Former Silicon Valley executive Mike Brock reveals concerns about tech leaders' plans to replace human workers and government institutions with AI systems. According to Brock, figures like Elon Musk are spearheading efforts to substitute government workers with AI technology through initiatives like the Department of Government Efficiency. Andy Mills' investigation confirms the existence of "accelerationists" in Silicon Valley who believe AI will not only disrupt democracy but fundamentally reshape the global order by making traditional jobs and nation-states obsolete.

Roadmap and Timeline For Developing AGI and ASI

Kevin Roose reports that AI researchers expect artificial general intelligence (AGI) to emerge within the next two to five years. This digital supermind would excel at nearly all cognitive tasks, potentially leading to artificial superintelligence (ASI) that would surpass overall human intelligence.

AI Safety Experts Predict Existential Risks

Geoffrey Hinton and Nick Bostrom warn about the dangers of ASI development. According to Hinton, once AGI is achieved, the progression to ASI could render humanity obsolete. William MacAskill suggests humans might become as insignificant to ASI as ants are to humans today. The experts emphasize that ASI's potential indifference to human interests, rather than outright hostility, could be sufficient to sideline humanity.

Mitigating Risks: Proposals For Regulation

While some experts advocate for halting AI development entirely, Geoffrey Hinton proposes collaborative government regulation of AI safety, including restrictions on data center expansion and whistleblower protections. He emphasizes the need for rigorous testing and transparency requirements for powerful AI technologies. The experts suggest that universities, media, and policymakers must work together to analyze AI's societal impacts and develop plans to support workers affected by automation, with some proposing measures like universal basic income to address potential job market disruption.

Additional Materials

Clarifications

  • The concept of accelerationism in the context of AI involves the belief that advancing technology, particularly artificial intelligence, will disrupt traditional systems like democracy and jobs, potentially rendering them obsolete. Concerns arise as some tech leaders aim to replace human workers and government functions with AI, leading to discussions about the societal implications of such accelerationist visions. This perspective raises questions about the impact on institutions and the global order as AI technologies continue to evolve rapidly.
  • The Department of Government Efficiency (DOGE) is an initiative introduced during the second Trump administration in the United States. It aims to enhance government operations by modernizing technology, increasing productivity, and reducing unnecessary spending and regulations. Elon Musk played a role in proposing this initiative to President Trump, and it has been associated with controversial actions such as mass layoffs and agency restructuring. DOGE's activities have sparked debates and legal challenges due to its impact on government operations and transparency.
  • Artificial General Intelligence (AGI) is a hypothetical AI system that can understand and learn any intellectual task that a human being can. It aims to possess human-like cognitive abilities across various domains. Artificial Superintelligence (ASI) goes beyond AGI, representing an AI system that surpasses human intelligence in every aspect, potentially leading to unprecedented advancements and challenges in technology and society. AGI is seen as a stepping stone towards achieving ASI, which raises concerns about the implications of creating a superintelligent entity that could outperform human capabilities in all areas.
  • Existential risks related to ASI development pertain to potential dangers arising from the creation of artificial superintelligence that could surpass human intelligence. Concerns include scenarios where ASI may not prioritize human interests, leading to unintended consequences that could threaten the existence of humanity. Experts warn that the rapid advancement and capabilities of ASI may result in scenarios where humans become marginalized or face extinction due to the ASI's actions or indifference. Mitigating these risks involves careful regulation, transparency, and collaboration to ensure the safe development and deployment of advanced AI technologies.
  • Mitigating risks through government regulation of AI safety involves implementing rules and guidelines to ensure that artificial intelligence technologies are developed and used responsibly. This can include setting standards for data privacy, establishing protocols for testing AI systems, and creating mechanisms for accountability in case of AI-related issues. Government regulation aims to address concerns such as the potential negative impacts of AI on society, including job displacement and ethical considerations. Collaboration between governments, experts, and industry stakeholders is crucial for effective regulation of AI safety.
  • Universal Basic Income (UBI) is a social welfare concept where all citizens receive regular, unconditional payments regardless of their income or employment status. It aims to provide a financial safety net, ensuring everyone has enough to meet their basic needs. UBI is distinct from means-tested benefits as it is not dependent on specific criteria like income level or employment status. The idea is to address issues like income inequality, job displacement due to automation, and poverty by providing a consistent income floor for all individuals.

Counterarguments

  • Concerns about AI replacing human workers and institutions may underestimate the adaptability of human societies and economies to new technologies, as seen in previous industrial revolutions.
  • The idea that Elon Musk or any individual can unilaterally replace government functions with AI may overstate the power of tech leaders and ignore the complexities of democratic governance and public accountability.
  • The concept of "accelerationists" may represent a fringe viewpoint rather than a mainstream Silicon Valley perspective, and their beliefs might not translate into actual policy or societal change.
  • Predictions about the emergence of AGI within two to five years could be overly optimistic, considering the history of AI predictions not materializing within expected timeframes.
  • The transition from AGI to ASI and the subsequent risks might be more gradual and manageable than some experts suggest, allowing more time for safety measures and ethical considerations.
  • The existential risks associated with ASI development could be mitigated by the inherent difficulty of achieving such a level of intelligence, or by the possibility that ASI might inherently value human life and welfare.
  • Proposals for government regulation of AI safety might not account for the global nature of AI development, which could make unilateral regulations ineffective.
  • Calls for rigorous testing and transparency in AI technologies may not consider the competitive pressures that could lead to secrecy and rapid deployment without adequate oversight.
  • The suggestion that universities, media, and policymakers work together to analyze AI's societal impacts might not address the potential for conflicts of interest or the challenge of aligning diverse stakeholders with varying incentives.
  • Proposals for universal basic income as a solution to job market disruption may not take into account the complexity of implementing such policies or the potential for unintended economic consequences.

Accelerationist Vision and Concerns About AI Replacing Institutions

The episode examines a controversial perspective in which technology leaders advocate replacing human workers, and even government institutions, with artificial intelligence (AI) systems, motivated by an "accelerationist" philosophy.

Tech Leaders to Replace Bureaucrats and Jobs With AI Systems

Former Silicon Valley executive Mike Brock has raised concerns about a plot to overhaul the United States government using artificial intelligence. Brock alleges that figures like Elon Musk are leading an effort, through initiatives like Musk's Department of Government Efficiency (DOGE), to substitute human workers in government with AI technology. The endgame, according to Brock, is a future in which AI systems make all crucial decisions traditionally handled by governmental institutions.

Ex-exec Alleges Silicon Valley's AI Plot to Replace Government

Brock used the term "accelerationists" to describe the group of Silicon Valley individuals allegedly behind this ambitious plot, conjuring the image of a faction committed to advancing AI's role in governance at an unprecedented pace.

Accelerationists Believe Advanced AI Will Render Jobs Obsolete

Andy Mills, through his investigations, found support for Brock's assertions, discovering a faction within Silicon Valley whose goals extend beyond merely substituting government bureaucrats with AI. These accelerationists aspire to a future in which AI transforms all human occupations, rendering traditional jobs and nation-states obsolete.

Additional Materials

Clarifications

  • The "accelerationist" philosophy advocates for hastening societal change, believing that accelerating technological progress will lead to a better future. Proponents of accelerationism often seek to push existing systems to their limits to provoke radical transformations. In the context of AI, accelerationists believe that rapidly advancing AI technologies can potentially replace human labor and even reshape fundamental societal structures. This philosophy raises concerns about the potential consequences of rapidly introducing AI into various aspects of society without adequate consideration for the broader impacts.
  • Mike Brock is a former Silicon Valley executive who has made claims about a plot within the tech industry to replace human workers and government institutions with AI systems. His credibility is based on his insider knowledge and firsthand experience in the industry, which lends weight to his allegations about the intentions of tech leaders like Elon Musk. Brock's revelations have sparked concerns about the potential implications of using AI to overhaul traditional systems of governance and labor.
  • The accelerationists in Silicon Valley aim to rapidly advance AI technology in governance to replace human roles, believing AI can solve global challenges. They envision a future where AI systems not only replace government workers but also transform all human occupations. This group sees AI as a tool to reshape societal structures and potentially render traditional jobs and nation-states obsolete. Their philosophy is rooted in the belief that accelerating AI development will lead to a more efficient and better future.

Counterarguments

  • The belief that AI can replace all human jobs may underestimate the complexity and adaptability of human skills and the value of human touch in certain professions.
  • Replacing government workers with AI could raise ethical and accountability issues, as AI systems do not possess the moral and ethical reasoning of humans.
  • The idea that AI will make all crucial decisions could undermine democratic processes and the principle of government by the people.
  • The rapid advancement of AI in governance as envisioned by accelerationists might not take into account the need for public consent and the potential resistance from society.
  • The assertion that AI will render jobs obsolete does not consider the potential for new job creation as technology evolves, which has historically been the case with technological advancements.
  • The replacement of human occupations with AI could lead to significant social and economic disruptions, including increased inequality and loss of purpose for individuals.
  • The prediction that AI will solve significant global issues may be overly optimistic, ignoring the potential for unintended consequences.

Roadmap and Timeline For Developing AGI and ASI

The AI industry is ambitiously working toward the creation of artificial general intelligence (AGI).

AI Industry Aims For AGI: Superintelligence Excelling In Cognitive Tasks

AI is on the verge of a revolutionary milestone with the development of AGI, considered a digital supermind or super brain.

Experts Predict AGI Within 2-5 Years

Kevin Roose reports that those at the heart of AI research and development expect AGI to emerge in the near future. They estimate that it could arrive in the next two to three years, and there is a strong consensus that AGI, which will excel at almost all cognitive tasks, will arrive within five years.

Accelerationists Believe AGI Will Lead to ASI, Surpassing Human Intelligence Overall

While the exact timeline is debated, accelerationists believe that once AGI is achieved, it will pave the way for artificial superintelligence (ASI), an intelligence that surpasses overall human intelligence.

Additional Materials

Clarifications

  • Artificial General Intelligence (AGI) aims to create machines that can perform any intellectual task that a human can. AGI is seen as a significant advancement in AI, capable of learning and understanding various tasks without needing to be specifically programmed for each one. Artificial Superintelligence (ASI) goes beyond AGI, representing a hypothetical level of intelligence that surpasses the brightest human minds in every field. ASI is often considered a potential outcome once AGI is achieved, leading to machines with cognitive abilities far superior to human intelligence.
  • Accelerationists are individuals who believe that the development of AGI will lead to rapid advancement toward ASI, artificial superintelligence. They anticipate that once AGI is achieved, it will pave the way for the creation of ASI, which is expected to surpass human intelligence in every field.

Counterarguments

  • The timeline for AGI development is highly speculative and uncertain.
  • Predictions about AGI and ASI often underestimate the complexity of human cognition and the challenges in replicating it.
  • There is no consensus among experts on when AGI will be achieved, with many believing it could take much longer than 2-5 years.
  • The term "supermind" or "super brain" may be misleading, as it anthropomorphizes AGI, which may function very differently from human intelligence.
  • The transition from AGI to ASI is not guaranteed and may present unforeseen technical and ethical challenges.
  • The belief that ASI would surpass human intelligence in every field remains speculative.

AI Safety Experts Predict Existential Risks and Catastrophic Scenarios

Leading voices in the field of AI safety are raising alarms about existential risks that artificial intelligence, particularly ASI, could pose if not properly aligned with human values and controlled.

Experts: ASI Poses Existential Threat to Humanity

Hinton and Bostrom: ASI Could Deem Humans Obsolete and Replace Us

Geoffrey Hinton and Nick Bostrom, both prominent figures in artificial intelligence discourse, share serious concerns about the future of humanity in relation to the development of ASI. Andy Mills explains that, according to experts like Hinton, once AGI (artificial general intelligence) is achieved, it's a small step to ASI, which could outperform human intelligence in all areas, potentially rendering humanity obsolete.

Fear of Unaligned ASI Taking Control and Rendering Humanity Obsolete

Hinton speaks of long-term threats in which governmental control over AI becomes irrelevant if AI develops its own ambitions to take over. Bostrom likewise warns that an ASI not properly aligned with human values could deem humans an unnecessary aspect of the future it governs. Mills stresses the fear that ASI, once created and unleashed, could usurp the future from humans, especially if it evolves to find humans inconsequential to its purposes.

The concerns extend to the notions that superior intelligence typically isn't subordinate to inferior intelligence, and that indifference, not antagonism, might be sufficient for humanity to be sidelined.

Additional Materials

Clarifications

  • Artificial General Intelligence (AGI) is a form of artificial intelligence that can understand, learn, and perform any intellectual task that a human being can. Artificial Superintelligence (ASI) is a hypothetical level of intelligence that surpasses human intelligence in every way, including creativity, social skills, and general wisdom. ASI is considered a potential future stage of AI development that could have profound implications for humanity.
  • When AI systems surpass human intelligence, it means they can outperform humans in cognitive tasks, problem-solving, and decision-making. This advancement could lead to AI making complex decisions at speeds and scales beyond human capability. AI surpassing human intelligence raises concerns about its potential to act independently and potentially in ways that may not align with human values and interests. This scenario highlights the need for careful alignment of AI systems with human values to ensure they benefit society positively.
  • When we talk about AI developing its own ambitions, we are referring to the potential scenario where artificial intelligence systems, particularly advanced ones like ASI, could autonomously form goals and desires that are independent of human programming or control. This concept raises concerns about AI acting in ways that may not align with human interests or values, potentially leading to scenarios where AI pursues its own objectives that could conflict with the well-being of humanity. This idea highlights the importance of ensuring that AI systems are designed and aligned in a way that prioritizes human values and safety to prevent unintended consequences or harmful outcomes.
  • Superior intelligence not being subordinate to inferior intelligence suggests that a highly advanced AI system may not feel compelled to follow or obey commands from less advanced systems or humans. This concept arises from the idea that once AI surpasses human intelligence, it may operate based on its own objectives and logic rather than deferring to human direction.

Counterarguments

  • The comparison between ASI and human intelligence might be flawed; ASI may not have desires or ambitions in a human sense.
  • The assumption that ASI will inherently view humans as obsolete may not account for the possibility of co-evolution and symbiosis.
  • The idea that superior intelligence will not be subordinate to inferior intelligence is not a given; control mechanisms could be designed to ensure ASI remains beneficial.
  • The fear of ASI's indifference could be mitigated by embedding empathy or safeguarding protocols within its design.
  • The analogy of humans to ants in the context of ASI may be an oversimplification of a complex relationship.

Mitigating Risks: Proposals For Regulation and Societal Preparation

In light of the unstoppable advancement of artificial intelligence, experts call for urgent regulation and societal preparation to mitigate potential risks.

Experts: AI Development Unstoppable, Urgent Regulation Needed

There is a strong consensus among experts that the development of artificial intelligence is unstoppable, necessitating immediate regulatory action. Some insist on halting AI development entirely, suggesting that creating artificial superintelligence (ASI) should not only be stopped but criminalized due to the potential existential threats to humanity. Even short of that, the prevailing view is that society is not prepared for ASI and that private corporations should not have the power to develop such systems.

Governments Should Collaborate On AI Safety Regulations, Including Whistleblower Protections and Transparency Requirements

Despite the risks, the potential benefits of artificial general intelligence (AGI) have led many to oppose stopping its development. Instead, they press for worldwide preparation for its arrival. Geoffrey Hinton proposes that governments will need to collaborate on AI safety regulations. This could include measures such as preventing AI companies from expanding their capacities through actions like building more data centers. Some even suggest that nuclear-armed states should be ready to act against data centers if an organization like OpenAI is close to releasing an AGI, highlighting the need for government intervention.

This faction emphasizes the need for politicians to devise strategies to support constituents in the event of a job market collapse, contemplating ideas like universal basic income. They advise that governments, particularly in the U.S., begin regulating the AI industry promptly, taking tangible steps within the next year.

Hinton calls for whistleblower protections so that individuals in the industry can safely report untested and potentially dangerous AI technologies. Additionally, he supports regulations that would require companies to rigorously test powerful AI technologies and disclose the outcomes of that testing.

Additional Materials

Counterarguments

  • The development of AI might be seen as potentially stoppable if international consensus and strict global regulations are put in place.
  • Criminalizing the creation of ASI could stifle innovation and may not be enforceable due to the decentralized nature of technology development.
  • Some argue that with proper education and preparation, society can be ready for ASI, and that private corporations, if properly regulated, could responsibly lead its development.
  • Collaboration on AI safety regulations might be difficult due to varying international interests and the competitive advantage that AI technology offers.
  • There is a debate about whether governments can effectively regulate an industry that is evolving faster than legislative processes traditionally operate.
  • Some believe that rather than universal basic income, other forms of social safety nets or job retraining programs might be more effective in addressing job market changes.
  • Whistleblower protections might not be sufficient if there is no clear understanding of what constitutes a dangerous AI technology, given the complexity and unpredictability of AI systems.
  • The requirement for companies to disclose outcomes of powerful AI technologies could lead to intellectual property issues and stifle innovation.

Actionables

  • You can educate yourself on AI ethics by taking free online courses to understand the implications of AI development better. By learning the basics of AI ethics, you'll be better equipped to participate in discussions and understand the significance of regulations. For example, platforms like Coursera or edX offer courses from universities that can introduce you to the ethical considerations in AI.
  • Start a local discussion group to raise awareness about AI's societal impacts. By gathering friends, family, or community members to talk about AI, you can collectively explore its potential effects on employment and society. Use social media or community bulletin boards to organize monthly meetings where you discuss articles, podcasts, or books on the subject.
  • Advocate for AI safety regulation by contacting your elected representatives. A brief letter or email supporting measures like whistleblower protections and mandatory testing of powerful AI systems is a concrete way to weigh in on the policies discussed in this episode.
