In this episode of the Shawn Ryan Show, Brett Adcock shares his journey from launching online ventures to founding multiple tech companies, including Figure AI, which develops humanoid robots. He discusses the capabilities of Figure's latest robot model, which can perform household tasks like unloading dishwashers and folding laundry, and explains how these robots use neural networks to operate autonomously and interact through speech.
The conversation explores how humanoid robots could transform productivity in homes and workplaces, with Adcock describing potential commercial and household applications. He also details his work with other ventures, including Cover's weapon detection technology for schools and Hark's development of advanced AI systems with custom hardware, addressing both the possibilities and challenges of these technologies.

Growing up on a family farm in rural Illinois, Brett Adcock developed an early interest in computers and entrepreneurship. His parents instilled strong values about controlling one's destiny through business ownership. Throughout his education, Adcock launched various online ventures, leading to his co-founding of Vettori, an AI-driven talent marketplace, which he later sold for $100 million to the Adecco Group.
After selling Vettori, Adcock ventured into more ambitious projects, founding Archer Aviation to develop electric vertical takeoff and landing aircraft, and co-founding Cover, an AI security company focusing on weapon detection technology.
Under Adcock's leadership, Figure AI has made significant strides in developing general-purpose humanoid robots. The company has produced three generations of robots in under four years, with the latest Figure 3 model capable of complex tasks like unloading dishwashers, folding laundry, and operating equipment. Standing at 5'6" and weighing 130 pounds, these robots feature 40 actuated joints and advanced sensory capabilities.
The robots' neural network control system, Helix, enables autonomous operation and natural interaction through speech, with the ability to handle variable real-world tasks and self-diagnose problems. According to Adcock, the robots can work in shifts up to ten hours daily and handle objects weighing up to 40 pounds.
Adcock predicts humanoid robots will transform productivity across homes and workplaces, potentially creating an unprecedented "age of abundance." He envisions initial deployment in commercial settings, followed by household adoption. While acknowledging safety and ethical concerns, Adcock maintains that robots will enhance rather than replace human capabilities, freeing people from menial tasks for more meaningful pursuits.
Through his new venture Hark, Adcock is developing advanced AI systems with custom hardware and interfaces, aiming to surpass current AI capabilities. Additionally, his work with Cover focuses on preventing school shootings using terahertz radar technology developed by NASA's Jet Propulsion Lab. The system can detect concealed weapons from a distance without privacy concerns, potentially transforming security in schools and public spaces.
1-Page Summary
Brett Adcock grows up in Moweaqua, a small town in Central Illinois with around 700 people, on a family farm that raises corn and soybeans. He comes from a family of third-generation farmers. His parents instill strong entrepreneurial values in him from a young age, emphasizing the importance of controlling his own destiny by running his own business, as farming itself is highly entrepreneurial. Adcock learns early that to find success and independence, he will likely need to leave farming and start his own venture.
He becomes interested in computers and technology while still young, constantly building things on the farm and eventually gravitating toward software and the internet. A visual learner with a love for science and mathematics, Adcock enjoys rebuilding physical items and computers, always seeking opportunities to work on tangible projects he can see and touch.
Throughout high school and college, Adcock launches multiple startups and online projects, including product sales, dropshipping, retail electronics, and lead-generation marketing. He views these ventures mainly as ways to make money, since he does not grow up with financial resources, but also as a chance to create things and exercise control over his own future. This early immersion in entrepreneurship sets the stage for his later business endeavors.
After college, Adcock moves to New York and continues working on software startups. Shortly after graduating, he co-founds Vettori, an AI-driven talent marketplace, in 2012.
Vettori's purpose is to disrupt the traditional headhunting industry, which Adcock describes as a "boys' club" lacking meritocracy and shrouded in inefficiency and opacity. Noting his own frustrations with job applications and recruiter interactions, he focuses on building a platform that uses algorithms—and eventually, AI—to understand both job seekers’ and employers’ preferences and automatically make high-quality matches at scale. The goal is to enable companies to connect with candidates efficiently, replacing the manual and exclusive process typical of the headhunting industry.
Vettori starts with tech talent in the United States and eventually expands to operate in nearly 20 cities globally. At its peak, the platform processes 20,000 to 30,000 interview requests weekly with no human intervention. The business model centers on subscription revenue from major companies, including banks, startups, and tech firms.
Despite early difficulties, including going into debt in 2015, the business "hockey sticks" in growth as the team solves operational challenges. In 2017 or 2018, after about six years, Vettori—now thriving—draws interest from the world’s largest recruiting firm (the Adecco Group), which acquires the company for a deal valued at approximately $100 million. Adcock notes this was a fitting time to move on, feeling proud of what the company accomplished and ready for a new challenge.
With the sale of Vettori complete, Adcock takes ...
Adcock's Entrepreneurial Journey and Background
Figure AI, under the leadership of Brett Adcock, is at the frontier of creating general-purpose humanoid robots designed for labor automation in homes and commercial spaces. Adcock founded the company with the goal of developing highly capable, affordable, and autonomous robots, solving technical challenges that had kept humanoid robots from viable real-world use.
In 2020, Brett Adcock launched Figure AI amid widespread skepticism about the feasibility of humanoid robots in practical settings. At the time, the field was dominated by large, hand-coded hydraulic robots ill-suited for domestic use or cost-effective deployment. Adcock recognized the need for low-cost, electric humanoids capable of meaningful work, guided by advanced AI rather than task-specific code. He self-funded the company during its formative period, assembling specialized teams in hardware and software and building robust actuators, battery systems, motor controllers, sensors, and embedded systems from the ground up. Adcock’s vision is a robot that mimics human movement and reasoning—walking, grasping, and understanding objects—enabling intuitive user interaction through speech and touch.
Adcock’s guiding principle is to give AI a body, engineering robots that achieve human-level dexterity and balance. These robots, about five feet six inches tall and weighing roughly 130 pounds, are equipped with 40 actuated joints (degrees of freedom), advanced rotor and sensor designs, and tactile sensors—especially in the fingertips—allowing sensitive manipulation of objects. Cameras in the robot’s palms enable visual monitoring of the grasp, matching the coordinated movement and balance of a human. The robots are engineered for dynamic tasks such as walking over uneven surfaces, recovering from pushes, and handling a range of weights, demonstrating better balance than many people.
Figure AI has achieved rapid prototyping, developing three robot generations in under four years. The company launched its first walking Figure 1 prototype in under twelve months, quickly followed by Figure 2—which saw real-world testing at BMW assembly lines—and the recently unveiled Figure 3, their most advanced model. Each new generation brought improvements: reduced mass, more powerful and efficient actuators, denser sensor arrays, integrated GPU and battery systems in the torso, and fifth-generation hands with advanced tactile and visual feedback.
Figure 3 robots display remarkable practical capabilities: they perform complex household tasks including unloading dishwashers, folding laundry, operating equipment such as Keurig coffee machines, feeding pets, and checking mail. In lab and real-world factory settings, these robots demonstrate durability and robust operation, running in shifts of up to ten hours daily for months without major faults. In industrial tests, Figure’s robots have worked alongside humans, accomplishing repetitive manufacturing tasks and handling objects up to 40 pounds. They routinely run their own diagnostic routines and calibration, autonomously performing "burpees" and other physical checks. Figure is moving toward producing a robot roughly every ninety minutes, signaling a leap in scalability.
The robots’ capabilities extend to a wide variety of complex domestic and logistics tasks. In Adcock’s own home, robots unload dishwashers, fold towels and shirts, move laundry from baskets to washers, pick up packages, and perform basic logistics work. The approach is modular: robots are trained with data on one task, such as folding towels, and can then be quickly retrained for logistics or other work, all without changing the hardware. They adapt to new types of clothes, objects, and packages as required. The robots also feature customizable “soft wraps”—wearable clothes that can be easily fitted or removed without tools.
In logistics and office environments, robots operate in 24/7 autonomous shifts. They communicate with each other to manage recharging cycles and replace one another during charging or recovery from faults—such as limping to a station if a joint fails. This ability to scale, self-diagnose, and self-correct demonstrates not just physical robustness, but autonomy at the fleet level.
The leap in Figure’s robotics advances comes from replacing hand-coded programming with neural network–driven control. Traditional robotics methods struggled with generalization and adaptability, especially in dynamic, uns ...
The Development and Capabilities of Figure's Humanoid Robots
Brett Adcock outlines a vision where humanoid robots usher in a transformative era for productivity, home life, and industry, while also underscoring the safety, ethical, and technical hurdles that must be addressed for widespread adoption.
Adcock asserts that the large-scale deployment of humanoid robots will be “crazy cool,” but will require millions of units to truly make a societal impact—a process now in its earliest stages. He describes these robots as “little mini humans” capable of using computers and machines and performing human-like work, predicting the “greatest increase in productivity we’ve ever seen in our lifetime.” According to Adcock, robots will significantly reduce the prices of goods and services, ushering in an unprecedented “age of abundance.”
He forecasts that businesses will be the first to encounter humanoid robots because task variability and complexity are lower in industrial and commercial settings. In these environments, work is broken into well-defined steps—making it easier for robots to function, much like highway driving is simpler for autonomous vehicles than city driving. This commercial rollout is likely to scale more quickly, fueled by the fact that half of global GDP comes from human labor and three billion people are in the workforce—a massive market for robotic efficiency. Adcock notes robots can command much higher prices in these settings, offering major productivity and efficiency benefits.
Home deployment is the ultimate goal, even though it's more technically challenging. Adcock envisions humanoid robots learning users’ homes and preferences instantly, much like showing a houseguest around for the first time. He describes a robot that semantically understands and remembers instructions, adapting to unique household routines through communication. Ultimately, he predicts that “in our lifetime, we will be fortunate enough for every human to have a humanoid,” equating their future ubiquity to that of cars or phones.
Adcock imagines a world where robots handle all forms of physical and digital “busy work”—ranging from washing dishes and preparing breakfast to scheduling appointments and managing digital tasks. He describes wanting an AI “operating system” to run daily logistics, delegating everything from paying bills to arranging meetings or booking services. He anticipates this level of automation within the next 24 months, projecting that people will no longer need to handle menial labor, whether physical chores or computer-based tasks.
This shift will make manual work entirely optional. Those who enjoy gardening or mowing the lawn may still do so, but the drudgery will cease being obligatory, allowing humans to focus on time with family, creative interests, or cerebral pursuits. Adcock frames this as a fundamental reprioritization of human labor toward fulfillment over necessity. In the commercial sector, humanoid robots will proliferate in environments like manufacturing, healthcare, and construction, amplifying productivity and efficiency at massive scales.
Adcock is candid about the technical, ethical, and safety challenges involved in bringing humanoid robots into homes and workplaces. He highlights safety as the longest and most significant hurdle, especially in household settings where people must trust robots around their children. Ensuring mechanical safety is essential—robots must be safe around hazards like candles or boiling water, and must not cause injury during interaction. Adcock likens th ...
Potential Societal Impacts and Applications of Humanoid Robots
In late 2025, Brett Adcock launches Hark, a new AI lab self-funded with $100 million, aiming to build what he calls "human-centric AI." Hark is not just developing standard assistants or chatbots but aspires to create groundbreaking multimodal AI systems. These systems are intended to surpass current AI capabilities by seamlessly integrating with and enhancing human capabilities. Adcock notes that Hark already has AI systems in the lab capable of using computers like a human, making calls, managing schedules, and performing tasks upon request. Hark’s team includes top AI engineers and the lead designer of recent iPhones, suggesting an emphasis on both advanced technology and user-focused interface design.
Adcock argues that today's AI chatbots, such as Gemini or ChatGPT, fall short of true intelligence and personal capability: they can't remember interactions, intuit context, see the world, or interface fluently with the internet and other tools. He says current systems still feel like using an incognito browser: impersonal, forgetful, and limited. Adcock envisions AI that can interact naturally with users—listening, speaking, seeing, and understanding them personally—much like the fictional AI ...
Adcock's Plans for Next-Gen AI Devices and Interfaces
After selling his previous company Vettori, Brett Adcock became obsessed with addressing the issue of school shootings. Noting that the number of school shooting events in the United States surged from 30-40 incidents per year to about 300 over the last decade, he recognized the urgency of developing a scalable solution. Adcock described reading research on the problem and stumbling onto terahertz radar, also known as millimeter wave technology. This form of technology, which operates at very high-frequency radio bands (200-400 gigahertz), originally emerged from NASA’s Jet Propulsion Lab (JPL). It had been developed for detecting concealed threats at a standoff distance during the Iraq and Afghanistan wars but was mothballed after funding ended. Adcock contacted the JPL scientists, visited to see the demo machine, and found that it could reliably detect hidden weapons underneath clothing from several meters away using a radio frequency setup.
Adcock initially paused work on Cover to build Archer, but was inspired to return to the problem after a conversation with an investor—who, as a parent, urged him to fulfill his “fiduciary duty” as the technology’s developer. The urgency was compounded by Adcock’s personal experience as his daughter prepared to start first grade, and he noted how schools are vulnerable due to open access. Adcock spun the technology out of Caltech’s Jet Propulsion Lab, assembled the original development team, and established Cover’s main office in Pasadena, near JPL. He self-funded the project and reported that the prototype was working as of the previous year, with the goal of beta testing in schools by year’s end.
Adcock pointed out that while only a small percentage of students are reported for bringing guns to school—due to the severe consequences—he believes the absolute number is likely in the tens or hundreds of thousands each year. The infrequency of reporting masks the true scale of the problem. Alongside gun incidents, there are also hundreds of knife stabbings annually. Adcock maintains that new technology is needed because traditional tools such as CCTV and metal detectors are reactive or intrusive and cannot effectively prevent shootings.
Cover’s system is based on terahertz imaging, transmitting very high-frequency radio waves to scan for concealed weapons. Unlike metal detectors or wands, the system operates at a distance—able to scan individuals at ten, twenty, or thirty meters as they walk through school entrances. It uses the returning radio waves to build what Adcock calls a “point cloud”—a 3D image constructed from radio frequency data. This form of detection generates a visual representation similar to a camera image, revealing objects such as guns or knives even if they are under clothes or inside backpacks.
Adcock explains that high-frequency radio waves are projected like radar. When the radio wave encounters a solid object, such as a weapon, it bounces back at a different rate than it would through human tissue or fabric. The system uses beam forming and advanced processing to reconstruct detailed 2D and 3D images of concealed items. AI analyzes these point clouds and images to determine the presence of weapons, distinguish between benign and potentially dangerous objects, and avoid false positives (such as mistaking a crayon box for a gun). The technology can detect a variety of weapons, including knives, and can identify items in waistbands, pockets, or bags, addressing the most common ways students carry guns into schools.
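Purely as an illustration of the idea Adcock describes (Cover's actual software is proprietary and not detailed in the episode), the pipeline can be sketched as a toy program: dense solids reflect radio waves far more strongly than fabric or tissue, so strong returns that cluster together become candidates for a concealed object. All names, thresholds, and data here are hypothetical.

```python
# Toy sketch of the detection pipeline described above: radar returns
# form a "point cloud", strong reflections are isolated, and a simple
# rule flags clusters that look like a concealed solid object.
# Thresholds and structure are invented for illustration; a real system
# applies trained AI models to much richer RF data.

from dataclasses import dataclass

@dataclass
class RadarPoint:
    x: float          # meters, horizontal position
    y: float          # meters, vertical position
    intensity: float  # normalized return strength, 0.0 to 1.0

def strong_returns(cloud, threshold=0.7):
    """Dense solids (e.g., metal) reflect far more strongly than fabric
    or tissue, so intensity alone isolates candidate points."""
    return [p for p in cloud if p.intensity >= threshold]

def flag_concealed_object(cloud, threshold=0.7, min_points=4):
    """Flag the scan if enough strong reflections appear together --
    a crude stand-in for the AI that separates weapons from benign items
    and suppresses false positives."""
    return len(strong_returns(cloud, threshold)) >= min_points

# Simulated scan: mostly weak returns (clothing and body) plus a small
# patch of strong returns where a dense object sits under the clothing.
scan = [RadarPoint(0.1 * i, 1.0, 0.2) for i in range(20)]
scan += [RadarPoint(0.5, 1.1 + 0.02 * i, 0.9) for i in range(5)]

print(flag_concealed_object(scan))  # prints True
```

The real system replaces the fixed intensity threshold with beam forming, 2D/3D image reconstruction, and learned classifiers, but the flow (collect returns, isolate strong reflections, classify the cluster) is the same.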
Adcock stresses the importance of keeping the technology affordable and scalable. Originally, some specialized hardware components cost tens of thousands of dollars apiece, but Cover succeeded in miniaturizing the components into custom chips that now cost just about $7 each. As a result, the hardware cost dropped by about 90%, making the system feasible for widespread deployment in both public and private schools, which are often already funded for security infrastructure.
Cover’s technology also incorporates camera systems, and in critical areas, may include microphones and higher frame rates to enhance situational awareness. The AI stack provides semantic understanding—evaluating if someone entering belongs there, if their behavior is unusual, or if the ti ...
Adcock's Work on Cover, the Weapon Detection Technology
