#292 Brett Adcock - Shawn Ryan Meets a Humanoid Robot

By Shawn Ryan Show

In this episode of the Shawn Ryan Show, Brett Adcock shares his journey from launching online ventures to founding multiple tech companies, including Figure AI, which develops humanoid robots. He discusses the capabilities of Figure's latest robot model, which can perform household tasks like unloading dishwashers and folding laundry, and explains how these robots use neural networks to operate autonomously and interact through speech.

The conversation explores how humanoid robots could transform productivity in homes and workplaces, with Adcock describing potential commercial and household applications. He also details his work with other ventures, including Cover's weapon detection technology for schools and Hark's development of advanced AI systems with custom hardware, addressing both the possibilities and challenges of these technologies.

This is a preview of the Shortform summary of the Mar 30, 2026 episode of the Shawn Ryan Show

1-Page Summary

Adcock's Entrepreneurial Journey and Background

Growing up on a family farm in rural Illinois, Brett Adcock developed an early interest in computers and entrepreneurship. His parents instilled strong values about controlling one's destiny through business ownership. Throughout his education, Adcock launched various online ventures, leading to his co-founding of Vettery, an AI-driven talent marketplace, which he later sold for $100 million to the Adecco Group.

After selling Vettery, Adcock ventured into more ambitious projects, founding Archer Aviation to develop electric vertical takeoff and landing (eVTOL) aircraft, and co-founding Cover, an AI security company focused on weapon detection technology.

Figure's Humanoid Robot Development

Under Adcock's leadership, Figure AI has made significant strides in developing general-purpose humanoid robots. The company has produced three generations of robots in under four years, with the latest Figure 3 model capable of complex tasks like unloading dishwashers, folding laundry, and operating equipment. Standing at 5'6" and weighing 130 pounds, these robots feature 40 actuated joints and advanced sensory capabilities.

The robots' neural network control system, Helix, enables autonomous operation and natural interaction through speech, with the ability to handle variable real-world tasks and self-diagnose problems. According to Adcock, the robots can work in shifts up to ten hours daily and handle objects weighing up to 40 pounds.

Societal Impact and Applications

Adcock predicts humanoid robots will transform productivity across homes and workplaces, potentially creating an unprecedented "age of abundance." He envisions initial deployment in commercial settings, followed by household adoption. While acknowledging safety and ethical concerns, Adcock maintains that robots will enhance rather than replace human capabilities, freeing people from menial tasks for more meaningful pursuits.

Future AI Developments and Security Technology

Through his new venture Hark, Adcock is developing advanced AI systems with custom hardware and interfaces, aiming to surpass current AI capabilities. Additionally, his work with Cover focuses on preventing school shootings using terahertz radar technology developed by NASA's Jet Propulsion Lab. The system can detect concealed weapons from a distance without privacy concerns, potentially transforming security in schools and public spaces.

Additional Materials

Clarifications

  • An AI-driven talent marketplace uses artificial intelligence to match job seekers with employers more efficiently. It analyzes skills, experience, and preferences to recommend the best candidates or job opportunities. The system can automate screening, scheduling, and communication processes. This reduces hiring time and improves fit between employers and employees.
  • Electric vertical takeoff and landing (eVTOL) aircraft are a new type of electric-powered flying vehicle that can take off, hover, and land vertically like a helicopter. They use multiple electric rotors or fans for lift and propulsion, enabling quieter and more efficient urban air mobility. eVTOLs aim to reduce traffic congestion and pollution by providing fast, on-demand air transportation within cities. Their development is significant for advancing sustainable aviation and transforming how people and goods move in urban environments.
  • "Actuated joints" in robots are the movable parts that mimic human joints, allowing the robot to bend, rotate, or move its limbs. Each actuated joint is powered by motors or other mechanisms to enable precise control of movement. Having 40 actuated joints means the robot has a high degree of flexibility and dexterity, similar to a human body. This allows the robot to perform complex tasks requiring fine motor skills.
  • A neural network control system is a type of artificial intelligence modeled after the human brain's structure, using layers of interconnected nodes to process information. It enables robot autonomy by learning from data and experiences to make decisions without explicit programming for every task. This system allows the robot to interpret sensory inputs, adapt to new situations, and perform complex actions independently. By continuously updating its internal model, the robot can improve its performance and handle unexpected challenges.
  • Helix is the name given to the neural network-based control system that manages the robot's movements and decision-making processes. It integrates sensory data to enable the robot to operate autonomously and interact naturally with its environment. The system uses advanced machine learning algorithms to adapt to new tasks and self-diagnose issues. Helix allows the robot to perform complex, variable real-world activities without constant human input.
  • Terahertz radar uses electromagnetic waves between microwave and infrared frequencies to detect objects. It can penetrate clothing and other materials without producing detailed images of a person's body. This allows it to identify concealed weapons while preserving individual privacy. Unlike X-rays, terahertz waves are non-ionizing and safe for humans.
  • NASA's Jet Propulsion Laboratory (JPL) develops advanced sensing technologies, including terahertz radar, for space exploration and Earth observation. Terahertz radar uses electromagnetic waves between microwave and infrared frequencies to detect objects through materials like clothing or walls. JPL's expertise in this technology enables precise, non-invasive detection of concealed items. This technology is adapted for security applications, such as weapon detection, due to its ability to see through obstructions without compromising privacy.
  • Ethical concerns about humanoid robots include privacy invasion, job displacement, and decision-making accountability. Safety issues involve physical harm from robot malfunctions or misuse. Ensuring robots act within human ethical norms and legal frameworks is challenging. Continuous monitoring and regulation are needed to mitigate risks.
  • "Custom hardware and interfaces" in AI development refers to designing specialized computer components and user interaction systems tailored specifically for AI tasks. This hardware can include processors optimized for machine learning calculations, enabling faster and more efficient AI performance than general-purpose devices. Custom interfaces allow AI systems to communicate and interact with users or other machines in ways best suited to their functions. Together, they enhance the AI's capabilities beyond what standard technology can achieve.
  • An "age of abundance" refers to a future period where advanced technology, like humanoid robots, produces goods and services so efficiently that scarcity is greatly reduced. This abundance means basic needs and many wants can be met easily and affordably for most people. It could lead to significant economic and social changes, including more leisure time and new opportunities for creativity and innovation. The concept is rooted in the idea that automation can free humans from repetitive labor, increasing overall wealth and quality of life.
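
The clarifications above place terahertz radiation between microwave and infrared frequencies. As a rough back-of-the-envelope illustration (the ~0.1–10 THz band limits are a common convention assumed here, not a figure from the episode), the corresponding wavelengths follow from λ = c/f:

```python
# Wavelength of terahertz radiation from lambda = c / f.
# Band limits (~0.1-10 THz) are an assumed convention, for illustration only.

C = 299_792_458  # speed of light in m/s

def wavelength_mm(freq_thz: float) -> float:
    """Return the wavelength in millimetres for a frequency given in THz."""
    return C / (freq_thz * 1e12) * 1e3  # metres -> millimetres

print(wavelength_mm(1.0))   # ~0.3 mm at 1 THz
print(wavelength_mm(0.1))   # ~3 mm at 0.1 THz, the lower edge of the band
```

These millimetre-scale wavelengths are short enough to resolve concealed objects through clothing yet, unlike X-rays, are non-ionizing, which is the property the privacy and safety claims above rest on.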

Counterarguments

  • While Adcock emphasizes that robots will enhance rather than replace human capabilities, there is ongoing concern among labor experts and economists that widespread adoption of humanoid robots could lead to significant job displacement, especially in sectors reliant on manual or repetitive labor.
  • The claim that robots will free people from menial tasks for more meaningful pursuits assumes that displaced workers will have access to opportunities for retraining or alternative employment, which may not be universally available.
  • The effectiveness and safety of Cover's weapon detection technology, while promising, may face challenges in real-world deployment, such as false positives, system reliability, and integration with existing security protocols.
  • The assertion that terahertz radar technology does not raise privacy concerns may be contested by privacy advocates, who argue that any form of surveillance technology in public spaces warrants careful scrutiny and oversight.
  • The vision of an "age of abundance" brought about by robotics and AI is optimistic, but some critics argue that technological advances alone do not guarantee equitable distribution of benefits, and may exacerbate existing social and economic inequalities.
  • The rapid pace of development and deployment of advanced AI and robotics raises ethical questions about oversight, transparency, and accountability, which may not be fully addressed by current industry practices.

Adcock's Entrepreneurial Journey and Background

Adcock's Rural Farm Upbringing Sparked an Early Interest in Computers

Brett Adcock grows up in Moweaqua, a town of around 700 people in central Illinois, on a family farm raising corn and soybeans. He comes from a third-generation farming family. His parents instill strong entrepreneurial values in him from a young age, emphasizing the importance of controlling his own destiny by running his own business, since farming itself is highly entrepreneurial. Adcock learns early that to find success and independence, he will likely need to leave farming and start his own venture.

He becomes interested in computers and technology while still young, constantly building things on the farm and eventually gravitating toward software and the internet. A visual learner with a love for science and mathematics, Adcock enjoys rebuilding physical items and computers, always seeking opportunities to work on tangible projects he can see and touch.

Adcock Founded Startups In School, Learning Entrepreneurship

Throughout high school and college, Adcock launches multiple startups and online projects: selling products, dropshipping, retail electronics, and lead-generation marketing. He views these ventures mainly as ways to make money, since he does not grow up with financial resources, but also as a way to enjoy creating things and exercising control over his own future. This early immersion in entrepreneurship sets the stage for his later business endeavors.

Adcock Co-founded Vettery, an AI Marketplace, Sold For $100 Million

After college, Adcock moves to New York and continues working on software startups. Shortly after graduating, he co-founds Vettery, an AI-driven talent marketplace, in 2012.

Vettery Aimed to Use Algorithms to Match Job Seekers and Employers, Disrupting Headhunting

Vettery's purpose is to disrupt the traditional headhunting industry, which Adcock describes as a "boys' club" lacking meritocracy and shrouded in inefficiency and opacity. Noting his own frustrations with job applications and recruiter interactions, he focuses on building a platform that uses algorithms—and eventually, AI—to understand both job seekers’ and employers’ preferences and automatically make high-quality matches at scale. The goal is to enable companies to connect with candidates efficiently, replacing the manual and exclusive process typical of the headhunting industry.
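
As a toy illustration of the kind of preference matching described (hypothetical code, not the company's actual system; the field names and weights are invented), a match score might combine skill overlap with salary compatibility:

```python
# Hypothetical job-matching sketch: score a candidate against a role by
# skill overlap and salary-range fit. Fields and weights are invented.

def match_score(candidate: dict, job: dict) -> float:
    """Return a 0..1 score: 70% skill overlap, 30% salary fit."""
    overlap = candidate["skills"] & job["required_skills"]
    skill_fit = len(overlap) / len(job["required_skills"])
    lo, hi = job["salary_range"]
    salary_fit = 1.0 if lo <= candidate["desired_salary"] <= hi else 0.0
    return 0.7 * skill_fit + 0.3 * salary_fit

candidate = {"skills": {"python", "sql", "ml"}, "desired_salary": 120_000}
job = {"required_skills": {"python", "sql"}, "salary_range": (100_000, 140_000)}
print(match_score(candidate, job))  # a perfect fit scores ~1.0
```

A real system would learn such weights from hiring outcomes rather than hard-coding them; the point is only that both sides' preferences reduce to a rankable score, which is what lets matching run at scale without human intervention.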

Vettery starts with tech talent in the United States and eventually expands to operate in nearly 20 cities globally. At its peak, the platform processes 20,000 to 30,000 interview requests weekly with no human intervention. The business model centers on subscription revenue from major companies, including banks, startups, and tech firms.

Despite early difficulties, including going into debt in 2015, the business "hockey sticks" in growth as the team solves operational challenges. In 2017 or 2018, after about six years, Vettery—now thriving—draws interest from the world’s largest recruiting firm, the Adecco Group, which acquires the company in a deal valued at approximately $100 million. Adcock notes this was a fitting time to move on, feeling proud of what the company accomplished and ready for a new challenge.

After Selling Vettery, Adcock Pursued Bold Ventures in Robotics, Aviation, and Security

With the sale of Vettery complete, Adcock takes ...


Additional Materials

Counterarguments

  • While Adcock’s entrepreneurial journey is impressive, his success may not be easily replicable for others from similar rural or underprivileged backgrounds, as access to technology, education, and networks can vary widely.
  • The narrative emphasizes individual drive and self-education, but it may understate the importance of external factors such as timing, market conditions, and potential support from mentors or investors.
  • The disruption of traditional headhunting by Vettery’s AI-driven platform could have contributed to job losses or reduced opportunities for human recruiters, raising questions about the broader social impact of such automation.
  • The claim that Vettery’s algorithmic matching is more meritocratic than traditional recruiting could be challenged, as algorithmic systems can perpetuate or even amplify existing biases if not carefully designed and monitored.
  • The focus on rapid scaling and acquisition as markers of success may overlook the long-term sustainability or societal value of the businesses created.
  • Adc ...

Actionables

  • You can create a personal challenge to identify and automate one repetitive task in your daily routine using simple tools like calendar rules, email filters, or basic spreadsheet formulas, helping you experience the benefits of efficiency and freeing up time for more meaningful activities.
  • A practical way to explore new fields is to set aside one hour each week to read or watch beginner-friendly resources on a technology or industry you know little about, then jot down three questions or ideas that come to mind, building curiosity and a habit of self-driven learning.
  • You can practice building a diverse t ...


The Development and Capabilities of Figure's Humanoid Robots

Figure AI, under the leadership of Brett Adcock, is at the frontier of creating general-purpose humanoid robots designed for labor automation in homes and commercial spaces. Adcock founded the company with the goal of developing highly capable, affordable, and autonomous robots, solving technical challenges that had kept humanoid robots from viable real-world use.

Adcock Founded Figure AI to Build Electric Humanoid Robots For Automation

In 2020, Brett Adcock launched Figure AI amid widespread skepticism about the feasibility of humanoid robots in practical settings. At the time, the field was dominated by large, hand-coded hydraulic robots ill-suited for domestic use or cost-effective deployment. Adcock recognized the need for low-cost, electric humanoids capable of meaningful work, guided by advanced AI rather than task-specific code. He self-funded the company during its formative period, assembling specialized teams in hardware and software and building robust actuators, battery systems, motor controllers, sensors, and embedded systems from the ground up. Adcock’s vision is a robot that mimics human movement and reasoning—walking, grasping, and understanding objects—enabling intuitive user interaction through speech and touch.

Dynamic, Agile Robots Mimic Human Movements

Adcock’s guiding principle is to give AI a body, engineering robots that achieve human-level dexterity and balance. These robots, about five feet six inches tall and weighing roughly 130 pounds, are equipped with 40 actuated joints (degrees of freedom), advanced motor and sensor designs, and tactile sensors—especially in the fingertips—allowing sensitive manipulation of objects. Cameras in the robot’s palms enable visual monitoring of the grasp, matching the coordinated movement and balance of a human. The robots are engineered for dynamic tasks such as walking over uneven surfaces, recovering from pushes, and handling a range of weights, demonstrating better balance than many people.

Rapidly Iterated Through Three Humanoid Robot Prototypes in Under Four Years

Figure AI has achieved rapid prototyping, developing three robot generations in under four years. The company launched its first walking Figure 1 prototype in under twelve months, quickly followed by Figure 2—which saw real-world testing at BMW assembly lines—and the recently unveiled Figure 3, their most advanced model. Each new generation brought improvements: reduced mass, more powerful and efficient actuators, denser sensor arrays, integrated GPU and battery systems in the torso, and fifth-generation hands with advanced tactile and visual feedback.

Figure 3 robots display remarkable practical capabilities: they perform complex household tasks including unloading dishwashers, folding laundry, operating equipment such as Keurig coffee machines, feeding pets, and checking mail. In lab and real-world factory settings, these robots demonstrate durability and robust operation, running in shifts of up to ten hours daily for months without major faults. In industrial tests, Figure’s robots have worked alongside humans, accomplishing repetitive manufacturing tasks and handling objects up to 40 pounds. They routinely run their own diagnostic routines and calibration, autonomously performing "burpees" and other physical checks. Figure is moving toward producing a robot roughly every ninety minutes, signaling a leap in scalability.

Figure 3 Robot Performs Tasks Like Unloading Dishes, Folding Laundry, and Operating Equipment

The robots’ capabilities extend to a wide variety of complex domestic and logistics tasks. In Adcock’s own home, robots unload dishwashers, fold towels and shirts, move laundry from baskets to washers, pick up packages, and perform basic logistics work. The approach is modular: robots are trained with data on one task, such as folding towels, and can then be quickly retrained for logistics or other work, all without changing the hardware. They adapt to new types of clothes, objects, and packages as required. The robots also feature customizable “soft wraps”—wearable clothes that can be easily fitted or removed without tools.

In logistics and office environments, robots operate in 24/7 autonomous shifts. They communicate with each other to manage recharging cycles and replace one another during charging or recovery from faults—such as limping to a station if a joint fails. This ability to scale, self-diagnose, and self-correct demonstrates not just physical robustness, but autonomy at the fleet level.
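
As a loose sketch of that fleet-level coordination (purely hypothetical; the episode does not describe Figure's actual scheduling logic, and the names and thresholds here are invented), one rotation step might look like:

```python
# Hypothetical fleet charge rotation: send the lowest-battery robot to a
# charger and pick the best-charged peer to cover its task meanwhile.

def rotate_charging(fleet, threshold=0.2):
    """Return (robot_to_charge, replacement); either may be None."""
    low = [r for r in fleet if r["battery"] < threshold]
    if not low:
        return None, None  # nobody needs charging yet
    to_charge = min(low, key=lambda r: r["battery"])
    ready = [r for r in fleet if r["battery"] > 0.8 and r is not to_charge]
    replacement = max(ready, key=lambda r: r["battery"]) if ready else None
    return to_charge, replacement

fleet = [
    {"name": "A", "battery": 0.15},
    {"name": "B", "battery": 0.95},
    {"name": "C", "battery": 0.55},
]
to_charge, replacement = rotate_charging(fleet)
print(to_charge["name"], replacement["name"])  # A goes to charge, B covers
```

The interesting design point the summary gestures at is that this logic runs across the fleet, not inside a single robot: uptime becomes a property of the group, which is what makes 24/7 shifts possible even though each unit must periodically stop to charge or recover from a fault.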

Neural Network Control System Enables Humanoid Robots to Operate Autonomously

The leap in Figure’s robotics advances comes from replacing hand-coded programming with neural network–driven control. Traditional robotics methods struggled with generalization and adaptability, especially in dynamic, uns ...


Additional Materials

Counterarguments

  • While Figure AI claims rapid prototyping and deployment, the text does not provide independent verification of the robots’ real-world performance or widespread adoption outside controlled environments.
  • The assertion that Figure’s robots have better balance than many humans is subjective and lacks quantitative comparison or peer-reviewed evidence.
  • The text highlights the robots’ ability to perform household and industrial tasks, but does not address the economic feasibility or cost-effectiveness of deploying such robots at scale compared to existing automation or human labor.
  • The modular retraining approach is described as efficient, but the text does not discuss potential limitations in generalization or the need for significant task-specific data to achieve reliable performance.
  • The claim of 24/7 autonomous operation and robust self-diagnosis is not accompanied by data on failure rates, maintenance requirements, or long-term reliability in diverse, uncontrolled settings.
  • The focus on neural network–driven control as a replacement for hand-coded programming does not address ongoing challenges in AI safety, interpretability, and the risk of unpredictable behavior ...

Actionables

  • You can create a personal log of repetitive or time-consuming tasks at home or work and brainstorm which ones could be automated or improved with robotics, helping you identify where future humanoid robots could make the biggest impact in your daily routine (for example, tracking how much time you spend on laundry, dishwashing, or sorting mail, and imagining how a robot could handle these for you).
  • A practical way to prepare for interacting with advanced robots is to practice giving clear, concise verbal instructions for everyday tasks, which helps you get comfortable with the kind of speech-based communication these robots will use (for example, try explaining to a friend or family member how to fold a shirt or make coffee using only your voice, focusing on clarity and ...


Potential Societal Impacts and Applications of Humanoid Robots

Brett Adcock outlines a vision where humanoid robots usher in a transformative era for productivity, home life, and industry, while also underscoring the safety, ethical, and technical hurdles that must be addressed for widespread adoption.

Adcock Predicts Humanoid Robots Will Enhance Productivity and Reduce Manual Labor in Homes and Workplaces

Adcock asserts that the large-scale deployment of humanoid robots will be “crazy cool,” but will require millions of units to truly make a societal impact—a process in its earliest stages now. He describes these robots as “little mini humans” capable of using computers, machines, and performing human-like work, predicting the “greatest increase in productivity we’ve ever seen in our lifetime.” According to Adcock, robots will significantly reduce the prices of goods and services, ushering in an unprecedented “age of abundance.”

He forecasts that businesses will be the first to encounter humanoid robots because task variability and complexity are lower in industrial and commercial settings. In these environments, work is broken into well-defined steps—making it easier for robots to function, much like highway driving is simpler for autonomous vehicles than city driving. This commercial rollout is likely to scale more quickly, fueled by the fact that half of global GDP comes from human labor and three billion people are in the workforce—a massive market for robotic efficiency. Adcock notes robots can command much higher prices in these settings, offering major productivity and efficiency benefits.

Home deployment is the ultimate goal, even though it's more technically challenging. Adcock envisions humanoid robots learning users’ homes and preferences instantly, much like showing a houseguest around for the first time. He describes a robot that semantically understands and remembers instructions, adapting to unique household routines through communication. Ultimately, he predicts that “in our lifetime, we will be fortunate enough for every human to have a humanoid,” equating their future ubiquity to that of cars or phones.

Adcock Envisions a Future Where Robots Handle Tasks, Freeing Humans For Higher-Level Work

Adcock imagines a world where robots handle all forms of physical and digital “busy work”—ranging from washing dishes and preparing breakfast to scheduling appointments and managing digital tasks. He describes wanting an AI “operating system” to run daily logistics, delegating everything from paying bills to arranging meetings or booking services. He anticipates this level of automation within the next 24 months, projecting that people will no longer need to handle menial labor, whether physical chores or computer-based tasks.

This shift will make manual work entirely optional. Those who enjoy gardening or mowing the lawn may still do so, but the drudgery will cease being obligatory, allowing humans to focus on time with family, creative interests, or cerebral pursuits. Adcock frames this as a fundamental reprioritization of human labor toward fulfillment over necessity. In the commercial sector, humanoid robots will proliferate in environments like manufacturing, healthcare, and construction, amplifying productivity and efficiency at massive scales.

Adcock Acknowledges Humanoid Robot Risks but Maintains Benefits Will Outweigh Challenges

Adcock is candid about the technical, ethical, and safety challenges involved in bringing humanoid robots into homes and workplaces. He highlights safety as the longest and most significant hurdle, especially in household settings where people must trust robots around their children. Ensuring mechanical safety is essential—robots must be safe around hazards like candles or boiling water, and must not cause injury during interaction. Adcock likens th ...


Additional Materials

Clarifications

  • Semantic understanding in robots refers to their ability to comprehend the meaning behind words, actions, and contexts rather than just processing raw data. This allows robots to interpret instructions in a human-like way, recognizing nuances and intentions. By understanding semantics, robots can remember user preferences and adjust their behavior accordingly over time. This capability enables personalized interactions and more effective assistance in dynamic, real-world environments.
  • "General-purpose functionality" means a robot can perform a wide variety of tasks across different environments without needing specialized programming for each one. It implies adaptability, allowing the robot to understand and respond to new, unpredictable situations like a human would. Achieving this requires advanced artificial intelligence, sensory perception, and decision-making capabilities. This versatility is crucial for robots to be useful in diverse real-world settings, from homes to workplaces.
  • Industrial and commercial tasks often follow standardized procedures and predictable workflows designed for efficiency. These environments use uniform tools, equipment, and layouts, reducing unexpected variations. In contrast, homes vary widely in design, objects, and human behaviors, creating complex, dynamic settings. This unpredictability makes it harder for robots to adapt and perform consistently in household tasks.
  • Commercial settings have structured, predictable tasks with clear steps, making it easier for robots to operate reliably. Highway driving is simpler for autonomous vehicles because it involves steady speeds and fewer variables compared to complex city environments. Both scenarios involve controlled conditions that reduce unexpected challenges. This predictability allows earlier and safer deployment of technology.
  • An AI operating system refers to an integrated platform that autonomously manages a wide range of daily tasks across physical and digital domains, rather than isolated functions. Unlike current AI applications that perform specific tasks (like voice assistants or recommendation systems), it coordinates multiple activities seamlessly, adapting to user preferences and contexts. It acts as a central controller, delegating chores, scheduling, and managing resources in real time. This holistic approach enables continuous, personalized support beyond the capabilities of standalone AI tools.
  • Household hazards like candles or boiling water pose risks because they require robots to detect and respond to dynamic, potentially dangerous situations. Ensuring safety involves advanced sensors and real-time processing to recognize hazards and avoid accidents. Ethical challenges include programming robots to prioritize human safety and make decisions in emergencies without causing harm. Developers must also address privacy concerns and ensure robots respect user autonomy while operating safely.
  • The analogy means that humanoid robots must meet very high safety standards before being trusted around people, especially children. Aircraft safety is rigorously tested and regulated to minimize risks despite inherent dangers. Similarly, robots need extensive testing to ensure they do not cause harm during everyday interactions. This sets a benchmark for public trust and acceptance of robots in homes.
  • New technologies often face initial public fear due to unfamiliar risks. Over time, safety standards improve through regulation, testing, and experience. Society gradually accepts these technologies as benefits outweigh dangers. This pattern has occurred with cars and airplanes, which were once seen as risky but are now integral to daily life.
  • "Scaling" in robot deployment means increasing the number of robots produced and used to meet large demand. It involves expanding manufacturing capac ...

Counterarguments

  • The claim that humanoid robots will significantly lower the prices of goods and services assumes that cost savings will be passed on to consumers, which is not guaranteed; companies may retain increased profits instead.
  • Large-scale deployment of humanoid robots could lead to significant job displacement, especially for workers in manual and service industries, potentially increasing unemployment and social inequality.
  • The technical challenges of achieving true general-purpose functionality and safety in diverse, unstructured home environments may be underestimated; progress in robotics has historically been slower and more difficult than anticipated.
  • The prediction that every human will have a humanoid robot in their lifetime may overlook affordability and accessibility issues, particularly for lower-income populations and developing countries.
  • The assertion that manual labor will become optional does not account for the value and satisfaction some people derive from physical work, nor does it address the potential loss of purpose or identity for those whose livelihoods depend on such labor.
  • The analogy to the ubiquity of cars or phones may not be appropriate, as the complexity, cost, and safety concerns of humanoid robots are significantly higher.
  • The focus on safety standards comparable to aircraft may not fully address the unique ethical and psychological concerns associated with ...

Get access to the context and additional materials

So you can understand the full picture and form your own opinion.
Get access for free

Adcock's Plans for Next-Gen AI Devices and Interfaces

Adcock Launched Hark, an AI Lab for AI-Powered Devices and Interfaces

In late 2025, Brett Adcock launched Hark, a new AI lab he self-funded with $100 million, aiming to build what he calls "human-centric AI." Hark is not just developing standard assistants or chatbots but aspires to create groundbreaking multimodal AI systems. These systems are intended to surpass current AI capabilities by seamlessly integrating with and enhancing human capabilities. Adcock notes that Hark already has AI systems in the lab capable of using computers like a human, making calls, managing schedules, and performing tasks upon request. Hark's team includes top AI engineers and the lead designer from recent iPhones, suggesting an emphasis on both advanced technology and user-focused interface design.

Adcock: The Key to Unlocking AI's Potential Is Pairing Advanced Models With Purpose-Built Hardware and Interfaces

Adcock argues that today's AI chatbots, such as Gemini or ChatGPT, fall short of true intelligence and personal capability—they can't remember interactions, intuit context, see the world, or interface fluently with the internet and other tools. He says current systems still feel like using an incognito browser: impersonal, forgetful, and limited. Adcock envisions AI that can interact naturally with users—listening, speaking, seeing, and understanding them personally—much like the fictional AI ...

Here’s what you’ll find in our full summary

Registered users get access to the Full Podcast Summary and Additional Materials. It’s easy and free!
Start your free trial today

Adcock's Plans for Next-Gen AI Devices and Interfaces

Additional Materials

Clarifications

  • Multimodal AI systems process and understand multiple types of data simultaneously, such as text, images, audio, and video. This allows them to interpret complex information more like humans do, integrating different senses. For example, a multimodal AI can analyze a photo while understanding spoken instructions about it. This capability enables richer, more natural interactions and better decision-making.
  • Human-centric AI focuses on designing artificial intelligence that deeply understands and adapts to individual human needs, emotions, and contexts. Unlike current AI, which often operates as generic tools, human-centric AI aims to create personalized, intuitive interactions that feel natural and supportive. It emphasizes seamless integration with human activities, enhancing rather than replacing human decision-making and creativity. This approach requires AI to be empathetic, context-aware, and capable of long-term relationship building with users.
  • AI systems that can "use computers like a human" can navigate software, websites, and applications autonomously, performing tasks without explicit programming for each action. This ability enables AI to handle complex workflows, adapt to new interfaces, and interact with digital environments dynamically. It moves beyond simple command execution to understanding context and making decisions like a human user. This capability is crucial for creating AI that can assist with real-world tasks seamlessly and intuitively.
  • Top AI engineers are experts who design and build advanced artificial intelligence systems, ensuring they function effectively and innovatively. The lead designer from recent iPhones is likely responsible for the user experience and interface design, making technology intuitive and visually appealing. Their involvement signals a focus on both cutting-edge AI technology and seamless, user-friendly device interaction. Combining these skills helps create AI devices that are powerful yet easy and natural for people to use.
  • Current AI systems like Gemini or ChatGPT process each interaction independently, lacking persistent memory of past conversations. They struggle to fully grasp nuanced context or user intent beyond the immediate input. These AIs cannot directly access or control external devices, software, or real-world environments. Their interaction is limited to text-based input and output without natural multimodal capabilities like vision or speech integration.
  • The analogy compares current AI to an incognito browser because both do not retain memory of past interactions. Incognito mode prevents storing browsing history or cookies, making each session isolated. Similarly, current AI systems often lack long-term memory, so they cannot remember previous conversations or personalize responses over time. This limits the AI's ability to build a continuous, context-aware relationship with users.
  • Jarvis is a highly advanced AI assistant featured in the Iron Man movies and comics. It can understand complex commands, manage Tony Stark’s technology, and interact naturally through speech and context awareness. Jarvis demonstrates seamless integration with various devices and real-time problem-solving abilities. It represents an ideal of AI that is deeply personalized and proactive.
  • Current hardware ...

Counterarguments

  • The claim that current AI systems like Gemini or ChatGPT cannot remember interactions or intuit context is not entirely accurate; these systems have made significant progress in context retention and personalization within session limits and through user profiles.
  • The assertion that purpose-built hardware is necessary for advanced AI experiences overlooks the rapid evolution and adaptability of existing consumer devices, which continue to integrate new AI capabilities through software updates.
  • Replacing smartphones and computers entirely with new AI devices may face significant user resistance due to entrenched habits, ecosystem lock-in, and the versatility of current devices.
  • The vision of AI that "deeply knows users" raises privacy and ethical concerns, as increased personalization often requires extensive data collection and proc ...


Adcock's Work on Cover, the Weapon Detection Technology

After Selling Vettori, Adcock Pursued a Solution to School Shootings With Sensors

After selling his previous company Vettori, Brett Adcock became obsessed with addressing the issue of school shootings. Noting that the number of school shooting events in the United States surged from 30-40 incidents per year to about 300 over the last decade, he recognized the urgency of developing a scalable solution. Adcock described reading research on the problem and stumbling onto terahertz radar, also known as millimeter wave technology. This form of technology, which operates at very high-frequency radio bands (200-400 gigahertz), originally emerged from NASA’s Jet Propulsion Lab (JPL). It had been developed for detecting concealed threats at a standoff distance during the Iraq and Afghanistan wars but was mothballed after funding ended. Adcock contacted the JPL scientists, visited to see the demo machine, and found that it could reliably detect hidden weapons underneath clothing from several meters away using a radio frequency setup.
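For a sense of scale, the 200-400 gigahertz band Adcock cites corresponds to wavelengths around a millimeter, which is why the technology is also described as "millimeter wave." A quick back-of-the-envelope check (plain Python, no Cover-specific details):

```python
# Wavelength = c / f for the 200-400 GHz band mentioned above.
c = 299_792_458  # speed of light, m/s

for f_ghz in (200, 400):
    wavelength_mm = c / (f_ghz * 1e9) * 1000  # convert meters to millimeters
    print(f"{f_ghz} GHz -> wavelength {wavelength_mm:.2f} mm")
# 200 GHz -> wavelength 1.50 mm
# 400 GHz -> wavelength 0.75 mm
```

Millimeter-scale wavelengths are short enough to resolve objects the size of a handgun while still passing through clothing, which is what makes the band attractive for standoff detection.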

Adcock initially paused work on Cover to build Archer, but was inspired to return to the problem after a conversation with an investor—who, as a parent, urged him to fulfill his “fiduciary duty” as the technology’s developer. The urgency was compounded by Adcock’s personal experience as his daughter prepared to start first grade, and he noted how vulnerable schools are because of their open access. Adcock spun the technology out of Caltech’s Jet Propulsion Lab, assembled the original development team, and established Cover’s main office in Pasadena, near JPL. He self-funded the project and reported that the prototype was working as of the previous year, with a goal of beta testing in schools by year’s end.

Adcock pointed out that while only a small percentage of students are reported for bringing guns to school—due to the severe consequences—he believes the absolute number is likely in the tens or hundreds of thousands each year. The infrequency of reporting masks the true scale of the problem. Alongside gun incidents, there are also hundreds of knife stabbings annually. Adcock maintains that new technology is needed because traditional tools such as CCTV and metal detectors are reactive or intrusive and cannot effectively prevent shootings.

Cover's System Uses Radio Waves to Create 3D Images That Detect Concealed Weapons Like Guns or Knives

Cover’s system is based on terahertz imaging, using very high-frequency radio waves to scan for concealed weapons. Unlike metal detectors or wands, the system operates at a distance, able to scan individuals at ten, twenty, or thirty meters as they walk through school entrances. It uses the returning radio waves to build what Adcock calls a “point cloud”: a 3D image constructed from radio frequency data. This form of detection generates a visual representation similar to a camera image, revealing objects such as guns or knives even if they are under clothing or inside backpacks.

Adcock explains that high-frequency radio waves are projected like radar. When the radio wave encounters a solid object, such as a weapon, it bounces back at a different rate than it would through human tissue or fabric. The system uses beam forming and advanced processing to reconstruct detailed 2D and 3D images of concealed items. AI analyzes these point clouds and images to determine the presence of weapons, distinguish between benign and potentially dangerous objects, and avoid false positives (such as mistaking a crayon box for a gun). The technology can detect a variety of weapons, including knives, and can identify items in waistbands, pockets, or bags, addressing the most common ways students carry guns into schools.
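The "point cloud" idea can be sketched abstractly: each radar echo carries a range plus two arrival angles, and those spherical measurements map to one 3D point. The sketch below is purely illustrative—the function name and measurement layout are assumptions for explanation, not Cover's actual pipeline:

```python
import numpy as np

# Hypothetical sketch: converting radar returns (range, azimuth, elevation)
# into 3D point-cloud coordinates. Assumed layout, not Cover's real system.
def returns_to_points(ranges_m, azimuth_rad, elevation_rad):
    """Convert spherical radar measurements to Cartesian x/y/z points."""
    x = ranges_m * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = ranges_m * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = ranges_m * np.sin(elevation_rad)
    return np.stack([x, y, z], axis=-1)

# Three echoes at 10 m: dead ahead, 30 degrees to the left, 15 degrees up.
ranges = np.array([10.0, 10.0, 10.0])
az = np.deg2rad([0.0, -30.0, 0.0])
el = np.deg2rad([0.0, 0.0, 15.0])
print(returns_to_points(ranges, az, el).round(2))
```

Accumulating thousands of such points as a person walks through a doorway yields the dense 3D surface that downstream models can classify.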

Adcock stresses the importance of keeping the technology affordable and scalable. Originally, some specialized hardware components cost tens of thousands of dollars apiece, but Cover miniaturized them into custom chips that now cost about $7 each. As a result, hardware costs dropped by about 90%, making the system feasible for widespread deployment in both public and private schools, which often already have funding for security infrastructure.

Cover’s technology also incorporates camera systems, and in critical areas, may include microphones and higher frame rates to enhance situational awareness. The AI stack provides semantic understanding—evaluating if someone entering belongs there, if their behavior is unusual, or if the ti ...


Adcock's Work on Cover, the Weapon Detection Technology

Additional Materials

Clarifications

  • Terahertz radar uses electromagnetic waves with frequencies between microwaves and infrared light, typically 0.1 to 10 terahertz. These waves can penetrate materials like clothing and plastics but are absorbed by water and metals, enabling detection of concealed objects. Millimeter wave technology operates at slightly lower frequencies (30-300 gigahertz) and is commonly used in security scanners. Both technologies provide high-resolution imaging without harmful radiation, making them suitable for safe, non-invasive scanning.
  • The Jet Propulsion Laboratory (JPL) is a NASA research center specializing in robotic space missions and advanced technology development. It creates cutting-edge instruments and systems, often pioneering innovations later adapted for other uses. JPL's expertise includes high-frequency radar and imaging technologies, which can detect objects remotely. Its work bridges space exploration and practical applications like security and surveillance.
  • Terahertz imaging uses electromagnetic waves between microwave and infrared frequencies to penetrate clothing but reflect off solid objects. These waves interact differently with materials based on their molecular composition, allowing the system to distinguish weapons from human tissue or fabric. The reflected signals are captured and processed to create detailed images revealing concealed items. This method is safe, non-ionizing, and effective for detecting hidden threats without physical contact.
  • A "point cloud" is a collection of data points in space representing the external surface of an object. Each point corresponds to a specific location where the radio waves reflected off an object and returned to the sensor. By measuring the distance and angle of these reflections, the system maps the shape and position of concealed items in three dimensions. This spatial data is then processed to create a detailed 3D image of the object’s form.
  • Beam forming is a signal processing technique that directs the transmission or reception of radio waves in specific directions to improve detection and resolution. It uses multiple antennas to create constructive interference patterns, focusing energy toward a target area. Advanced processing involves algorithms that analyze the reflected signals to extract detailed spatial information and filter out noise. Together, these methods enhance the clarity and accuracy of radar images by isolating objects from background clutter.
  • Ionizing radiation carries enough energy to remove tightly bound electrons from atoms, potentially causing cellular damage and increasing cancer risk. Non-ionizing radiation lacks this energy and cannot break chemical bonds or damage DNA. Examples of ionizing radiation include X-rays and gamma rays, while radio waves and terahertz waves are non-ionizing. This distinction matters because non-ionizing radiation is generally considered safe for continuous exposure, especially in environments with children.
  • Miniaturizing hardware components allows devices to be smaller, lighter, and easier to install in various environments. Reducing costs makes advanced technology accessible to more schools and public venues, enabling widespread adoption. Lower expenses also facilitate maintenance, upgrades, and scalability without requiring large budgets. This combination is crucial for practical, large-scale deployment of security systems like Cover’s.
  • AI analyzes point clouds by using machine learning models trained on thousands of 3D scans of various objects to recognize shapes and patterns. It extracts features like size, contours, and surface texture from the 3D data to classify objects. The AI compares these features against known profiles of weapons and everyday items to identify potential threats. It continuously improves accuracy by learning from new data and feedback to reduce false positives.
  • Semantic understanding in AI refers to the ability of a system to interpret the meaning and context of data, not just raw signals. In security systems, it enables AI to recognize behaviors, objects, or situations that indicate potential threats by analyzing patterns and context. This helps differentiate between normal and suspicious activities, improving accuracy and reducing false alarms. Essentially, it allows the system to "understand" what it observes beyond simple detection.
  • "Fiduciary duty" refers to a legal and ethical obligation to act in the best interest of others, such as investors or stakeholders. I ...

Counterarguments

  • The effectiveness of terahertz imaging in real-world, high-traffic school environments has not been independently validated at scale, and there may be challenges with false positives or missed detections.
  • While the technology is described as non-intrusive, the presence of constant surveillance and scanning could still contribute to a sense of being monitored, potentially affecting school climate and student well-being.
  • The claim that the technology does not raise privacy concerns may not account for broader debates about surveillance, data retention, and the use of AI in public spaces.
  • The reduction in hardware costs does not necessarily address ongoing expenses related to installation, maintenance, software updates, and staff training, which could be significant for underfunded schools.
  • There is limited evidence that technology alone can address the root causes of school violence, which are complex and may require broader social, mental health, and policy interventions.
  • The deployment of such systems in public spaces co ...

