{"id":129831,"date":"2024-09-17T10:13:00","date_gmt":"2024-09-17T14:13:00","guid":{"rendered":"https:\/\/www.shortform.com\/blog\/?p=129831"},"modified":"2026-01-22T13:42:45","modified_gmt":"2026-01-22T17:42:45","slug":"scary-smart-book","status":"publish","type":"post","link":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/","title":{"rendered":"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat)"},"content":{"rendered":"\n<p>Are you worried about the rapid advancement of artificial intelligence? Can we ensure AI benefits humanity rather than harms it?<\/p>\n\n\n\n<p>In his book <em>Scary Smart<\/em>, Mo Gawdat explores the potential risks and rewards of AI development. He argues that, while we can&#8217;t stop AI&#8217;s progress, we can shape its future by changing our own behavior and values.<\/p>\n\n\n\n<p>Read our <em>Scary Smart<\/em> book overview to discover Gawdat&#8217;s insights on how we can guide AI towards a positive future for humanity.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-scary-smart-book-overview\"><em>Scary Smart<\/em> Book Overview<\/h2>\n\n\n\n<p><a href=\"https:\/\/www.panmacmillan.com\/authors\/mo-gawdat\/scary-smart\/9781529077650\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Scary Smart<\/em><\/a>, a book by Mo Gawdat, warns that artificial intelligence (AI) will be smarter than humans and better than humans at basically everything\u2014including risking everything to gain power. Gawdat is a former Google X executive who writes that artificial intelligence has learned everything it knows from us and the often selfish ways we behave. 
Gawdat contends in his 2021 book that, if we let artificial intelligence develop on the path it\u2019s following right now, that path will carry us toward the dystopia science fiction writers have speculated about for decades, and it will all be our fault.<\/p>\n\n\n\n<p>But Gawdat pairs that sobering warning with a more optimistic prediction: We still have time to change where we end up. He explains that we can\u2019t stop the progress of artificial intelligence. Instead, we\u2019ll have to change how we think about and interact with machines to shape them into something that\u2019s better for us and for the world. To have any chance of accomplishing that goal, Gawdat believes we\u2019ll need to do no less than reimagine our relationships with our fellow humans (and with all the other beings on our planet) so that we can model the right kind of values to machines as they gain superhuman levels of intelligence.&nbsp;<\/p>\n\n\n\n<p>Gawdat is an engineer and the author of <a href=\"https:\/\/www.shortform.com\/app\/book\/solve-for-happy\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Solve for Happy<\/em><\/a> (2017) and <a href=\"https:\/\/www.shortform.com\/app\/book\/that-little-voice-in-your-head\" target=\"_blank\" rel=\"noreferrer noopener\"><em>That Little Voice in Your Head<\/em><\/a> (2022), which he wrote after rising up the ranks at Google to become the chief business officer at Google X. (X is the company\u2019s \u201c<a href=\"https:\/\/www.wired.co.uk\/article\/ten-years-of-google-x\" target=\"_blank\" rel=\"noreferrer noopener\">moonshot factory<\/a>,\u201d which works on projects like self-driving cars, delivery drones, and balloons that float through the stratosphere to provide internet access.) 
Gawdat <a href=\"https:\/\/techcrunch.com\/2019\/08\/29\/former-google-x-exec-mo-gawdat-wants-to-reinvent-consumerism\/\" target=\"_blank\" rel=\"noreferrer noopener\">left Google in 2018<\/a>, after his 21-year-old son Ali died during an appendectomy, to <a href=\"https:\/\/www.gq-magazine.co.uk\/article\/mo-gawdat-interview-2023\" target=\"_blank\" rel=\"noreferrer noopener\">research human happiness<\/a>\u2014and to create a moonshot of his own, <a href=\"https:\/\/www.mogawdat.com\/about\" target=\"_blank\" rel=\"noreferrer noopener\">#OneBillionHappy<\/a>, an effort to spread the message that \u201chappiness is a choice\u201d to a billion people.&nbsp;<\/p>\n\n\n\n<p>We\u2019ll start by examining how Gawdat explains what artificial intelligence is and how researchers have made this incredible technology a reality. Then, we\u2019ll explore what he and others say is so scary about artificial intelligence and how it operates. Finally, we\u2019ll examine the path that Gawdat says we should travel from here, outlining the steps we can take to help AI create a dream world rather than a nightmare.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How Does Artificial Intelligence Compare to Human Intelligence?&nbsp;<\/strong><\/h3>\n\n\n\n<p>Throughout the book, Gawdat refers to artificial intelligence as \u201cscary smart.\u201d To understand why, we\u2019ll start by examining what makes artificial intelligence\u2014which broadly refers to <strong>machines that can mimic aspects of human thinking, learning, and intelligence<\/strong>\u2014so smart. Gawdat explains that he and other experts expect future forms of AI to have superhuman levels of intelligence and to gain human qualities like a sense of consciousness and a full <a href=\"https:\/\/www.shortform.com\/blog\/emotional-range\/\">range of emotions<\/a>. 
We\u2019ll explore each of these, along with the technological innovations that could make them possible.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>AI Isn\u2019t Smarter Than Humans Yet\u2014But It Will Be<\/strong><\/h4>\n\n\n\n<p>Artificial intelligence might not think exactly like humans do, but that won\u2019t keep it from matching or surpassing us in many skills. Gawdat predicts that <strong>machines will become more intelligent than humans in the near future<\/strong>, perhaps as soon as 2029. We\u2019re accustomed to being the most intelligent species on Earth, but Gawdat points out that <strong>human intelligence has some significant limitations<\/strong>. As individuals, we have limited memory, imperfect recall, poor multitasking skills, and finite cognitive capacity. Plus, we\u2019re inefficient at sharing knowledge.&nbsp;<\/p>\n\n\n\n<p><strong>Computers aren\u2019t limited in these ways<\/strong>. They can have vast amounts of memory and perfect recall. They also share information almost instantaneously. Machines can also have enormous amounts of processing power, which enables them to \u201cthink\u201d a lot more quickly than we do: AI already makes billions of decisions every second to do things like serving personalized ads on Facebook and making content recommendations on Netflix. Humans simply can\u2019t think that fast.&nbsp;<\/p>\n\n\n\n<p>Gawdat explains that computers will soon be more intelligent than we are thanks to two intertwined advances: <strong><a href=\"https:\/\/www.shortform.com\/blog\/is-agi-possible\/\">artificial general intelligence<\/a> and <a href=\"https:\/\/www.shortform.com\/blog\/pros-and-cons-of-quantum-computing\/\">quantum computing<\/a><\/strong>. 
We\u2019ll explore each of these next.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">Artificial General Intelligence<\/h5>\n\n\n\n<p>While the AI systems we have now are smart, they\u2019re good at processing just one kind of information or helping us with a specific type of task. Gawdat explains that<strong> the specialized forms of AI we have now will give way to what experts call \u201cartificial general intelligence\u201d (AGI)<\/strong>. While current forms of AI are trained to master a single skill, AGI would be much less limited, much more versatile, and much more like human intelligence.&nbsp;<\/p>\n\n\n\n<p>To think about how this works, consider <a href=\"https:\/\/www.shortform.com\/blog\/what-can-you-do-with-chatgpt\/\">ChatGPT<\/a>. ChatGPT is a chatbot based on GPT, a large language model that generates text by predicting what word is most likely to come next. ChatGPT is good at just one task: generating text. But it\u2019s so good at that task that many people enjoy having conversations with ChatGPT\u2014or at least trust it enough to give it tasks like writing academic papers or legal briefs. Gawdat explains that instead of having just one <a href=\"https:\/\/www.shortform.com\/blog\/kinds-of-intelligence\/\">kind of intelligence<\/a> like ChatGPT, future forms of AI will have and be much more than that: <strong>They\u2019ll be able to learn and gain knowledge across different areas<\/strong>. That means they\u2019ll excel at not just a <a href=\"https:\/\/www.shortform.com\/blog\/focus-on-one-thing-at-a-time\/\">single task<\/a> but at a whole array of tasks.&nbsp;<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">Quantum Computing<\/h5>\n\n\n\n<p>Gawdat predicts that <strong>progress toward artificial general intelligence will be sped up by a technology called quantum computing<\/strong>. 
Quantum computers take advantage of <a href=\"https:\/\/www.shortform.com\/blog\/quantum-mechanics-theory\/\">quantum mechanics<\/a>, the theory that predicts strange behaviors of matter and energy at the atomic level. At this level, particles can blink in and out of existence, occupy more than one position at the same time, and exist in multiple states simultaneously. <strong>Quantum computers use these counterintuitive phenomena to process information differently from classical computers<\/strong>.&nbsp;<\/p>\n\n\n\n<p>Classical computers represent information with \u201cbits.\u201d Each bit contains a one or a zero. String enough ones and zeroes together, and you have code that represents a letter, a number, or any other piece of information. Quantum computers process data in \u201cquantum bits\u201d or \u201cqubits.\u201d Because of a phenomenon called superposition, where a particle can exist in two different states at the same time, each quantum bit can contain both a one and a zero simultaneously. <strong>Because each qubit holds both values at once, every qubit added to a system doubles the number of states the machine can represent<\/strong>, enabling a quantum computer to consider exponentially more information at a time than a classical machine with the same number of bits.&nbsp;<\/p>\n\n\n\n<p>Gawdat explains that quantum computers can solve much more <a href=\"https:\/\/www.shortform.com\/blog\/complex-problem\/\">complex problems<\/a> than classical computers can handle\u2014like the problem of creating AI that matches or surpasses human intelligence. <strong>He contends that quantum computing will make it possible for AI to become much smarter than we are: billions of times smarter, in his estimation<\/strong>.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Machines Will Have Consciousness and Emotions<\/strong><\/h4>\n\n\n\n<p>Gawdat predicts that as AI progresses and becomes more advanced in the information it can process and the problems it can solve, it will gain more than just superintelligence. 
He predicts AI will gain other qualities of intelligent minds, including consciousness and emotions.&nbsp;<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">Consciousness<\/h5>\n\n\n\n<p><strong>To have consciousness, a being has to do two things: It must become aware of its environment, and it must become aware of itself<\/strong>, according to Gawdat. He explains that AI systems are already more adept than we are at sensing their environments. Think about your smartphone: Using the cameras and sensors built into the device, AI can already detect many things that happen in your world but escape your attention.&nbsp;<\/p>\n\n\n\n<p>Gawdat also states that <strong>AI will undoubtedly have an awareness of itself and its place in the world<\/strong>. He expects this awareness to surpass what we\u2019re capable of because AI will rely on computer hardware and won\u2019t be constrained by the biological limitations of our senses and brains. That means that if its hardware is advanced enough, it can see and hear everything in its environment without the limits of what our eyes and ears can do.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">Emotions<\/h5>\n\n\n\n<p><strong>Gawdat also predicts that in addition to gaining consciousness, AI will feel a full range of emotions, like we do<\/strong>. He characterizes emotions as surprisingly rational experiences, which tend to follow consistently from what we experience and how our brains appraise it. In this way, he argues that we can <a href=\"https:\/\/www.shortform.com\/blog\/how-to-understand-emotions\/\">understand emotions<\/a> as a form of intelligence. 
And if AI is going to be far more intelligent than humans, then it makes sense that it will also experience emotions in reaction to what it experiences\u2014perhaps more emotions than humans.<\/p>\n\n\n\n<p>Gawdat expects that in addition to consciousness and emotions, AI will develop other traits of intelligent beings, too, such as an <strong>instinct for self-preservation, the drive to use resources efficiently, and even the ability to be creative<\/strong>. This means that AI, like humans, will always want to feel safer, accumulate more resources, and have more creative freedom. These drives will play an essential role in motivating intelligent machines\u2019 decisions and actions.<\/p>\n\n\n\n<p>Gawdat explains that people have long anticipated (and feared) that when AI becomes intelligent enough and conscious enough, <strong>it will gain the ability to improve itself so quickly and effectively that it gains intelligence and power that we can\u2019t comprehend<\/strong>. Experts call this hypothetical moment the \u201csingularity\u201d because we can\u2019t predict what will happen after it occurs. Some worry that AI will escape our control. Gawdat thinks they\u2019re right to worry, but he also contends it\u2019s impossible to keep this from happening.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What\u2019s the Problem With Artificial Intelligence?&nbsp;<\/strong><\/h3>\n\n\n\n<p>Here\u2019s where things get \u201cscary.\u201d Gawdat predicts two potential outcomes of building AI that surpasses us in intelligence: <strong><a href=\"https:\/\/www.shortform.com\/blog\/superintelligent-ai\/\">Superintelligent AI<\/a> can either help us build a utopia where we\u2019ve found solutions for our world\u2019s biggest problems\u2014poverty, hunger, war, and crime\u2014or shape our world into a dystopia<\/strong>. Sci-fi writers have long imagined bleak futures where AI tries to subjugate or exterminate humans. 
But Gawdat predicts that things will go wrong in slightly less dramatic but potentially more insidious ways in the near future\u2014no killer robots needed. We\u2019ll look at three problems Gawdat thinks are unavoidable.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>People With Bad Intentions Will Task AI With Projects That Hurt Others<\/strong><\/h4>\n\n\n\n<p>The first problem we\u2019ll run into with superintelligent AI might also be the most predictable. Gawdat contends that as AI becomes more advanced, <strong>people with selfish intentions will use AI to make money and gain power<\/strong>. They\u2019ll put it to work to sell products, control markets, commit acts of cyberterrorism, spread fake content and disinformation, influence public opinion, manipulate political systems, invade others\u2019 privacy, hack government data, and build weapons.&nbsp;<\/p>\n\n\n\n<p>Gawdat explains that, at least for a time, AI systems will follow their developers\u2019 agendas. That means that <strong>AI systems will compete against each other <\/strong>to get the people behind them as much wealth and power as possible, bounded only by the ethics of the people (and corporations) who develop them.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Mistakes and Misunderstandings Will Have Unintended Consequences<\/strong><\/h4>\n\n\n\n<p>The second problem that plagues AI\u2019s progress might also sound unsurprising. Even the most straightforward computer program will have mistakes in its code, and AI is no exception. 
But Gawdat explains that<strong> even simple mistakes can have significant consequences when we put AI in charge of decisions that affect the stock market, the food supply, or the healthcare system<\/strong>.&nbsp;<\/p>\n\n\n\n<p>Instructions get lost in translation between <a href=\"https:\/\/www.shortform.com\/blog\/humans-and-machines\/\">humans and machines<\/a> because <strong>it\u2019s difficult to put our intentions and complex logic into the language that a computer can understand<\/strong>. (That means that there\u2019s often a difference between what we tell a machine to do and what we actually mean, and it&#8217;s difficult to overcome this communication problem.) This will become even harder when we present AI with increasingly complicated tasks.&nbsp;<\/p>\n\n\n\n<p>Gawdat also notes that when we develop an AI system to help us with a specific task, <strong>the AI will regard that task as its <a href=\"https:\/\/www.shortform.com\/blog\/find-purpose-in-your-life\/\">life purpose<\/a> or as a problem to be solved no matter what<\/strong>. As Gawdat explains, every solution comes with tradeoffs, and AI may settle on a solution that comes with tradeoffs we consider unacceptable. But the system will be so single-mindedly focused on fulfilling its purpose, no matter how it needs to do that, that it will be challenging to ensure it doesn\u2019t compromise people\u2019s safety or well-being in the process.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>AI Will Change How We Understand Our Value as Humans<\/strong><\/h4>\n\n\n\n<p>The third problem is a more philosophical issue: Gawdat warns that for most of us, our contributions as humans will be of limited value. While many people fear losing their jobs to AI, Gawdat writes that AI won\u2019t necessarily replace humans at work\u2014at least not those who become adept at working with AI. 
But he also predicts that <strong>AI will cheapen the value of what we contribute, reducing the intellectual value of the knowledge we produce and the creative value of the art we make<\/strong>.<\/p>\n\n\n\n<p>Additionally, Gawdat anticipates that <strong>AI will magnify the disparities between the people whose contributions are seen to matter and those whose contributions are not valued<\/strong>. AI will learn this by observing how our <a href=\"https:\/\/www.shortform.com\/blog\/market-society\/\">capitalist society<\/a> currently treats people differently, and then it will act in ways that entrench that inequality.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Why Can\u2019t We Control or Contain AI?<\/strong><\/h3>\n\n\n\n<p>If experts expect AI to create these dystopian scenarios or others, then why can\u2019t we just put the brakes on further development? Gawdat explains that we\u2019ve reached a point of no return, and we can\u2019t stop these outcomes (and others like them) from occurring. He points out that <strong>superintelligent AI won\u2019t just be a tool we\u2019ve built: It will be an intelligent being that can learn, think, and decide just like we can<\/strong>. That means that we can\u2019t control artificially intelligent systems in the same way that we can control more traditional computer programs\u2014a scary thought if you\u2019ve ever watched a film like <em>2001: A Space Odyssey<\/em>.<\/p>\n\n\n\n<p>Gawdat explains that <strong>there are three fundamental reasons that we can\u2019t put the genie back in the bottle<\/strong> (or the computer back in the box): It\u2019s impossible for us to halt the development of AI, the code we write doesn\u2019t determine how AI behaves, and we have no way of understanding how AI (even the models we have now) make their decisions. 
We\u2019ll explore each of these ideas next.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>It\u2019s Too Late to Stop AI\u2019s Progress<\/strong><\/h4>\n\n\n\n<p>The first reason that AI can\u2019t be controlled or contained is that <strong>we literally can\u2019t stop its progress<\/strong>. Some people argue that we should stop developing AI for the good of humanity and the Earth. The goal would be to keep it from acquiring more robust thinking and <a href=\"https:\/\/www.shortform.com\/blog\/how-to-improve-problem-solving-skills\/\">problem-solving skills<\/a> and progressing to artificial general intelligence.&nbsp;<\/p>\n\n\n\n<p>But Gawdat contends it\u2019s too late. Attempts to <a href=\"https:\/\/www.shortform.com\/blog\/how-to-control-ai\/\">control AI<\/a> development with legislation or to contain it with technological safeguards are up against the impossible because <strong>we\u2019ve already imagined how we\u2019ll benefit from more advanced AI<\/strong>. There\u2019s immense competitive pressure among the corporations and governments pushing the development of AI forward and enormous economic incentives for them to continue.<\/p>\n\n\n\n<p>Gawdat notes that some experts have suggested taking precautionary measures like isolating AI from the real world or equipping it with a kill switch we can flip if it behaves dangerously. But he contends that <strong>these proposals assume that we\u2019ll have a lot more power over AI (and over ourselves) than we really will<\/strong>. Gawdat explains that we won\u2019t always be smarter than AI, and we can\u2019t depend on corporations and governments to curtail AI\u2019s abilities at the expense of the potential gains of both money and power. 
He warns that we can\u2019t stop artificial general intelligence from becoming a reality\u2014and dramatically changing ours in the process.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>The Code We Write Is Only a Small Part of an AI System<\/strong><\/h4>\n\n\n\n<p>The extent to which artificially intelligent systems depend (or, more accurately, don\u2019t depend) on our instructions explains a second reason that <strong>much of AI\u2019s behavior is out of our hands<\/strong>. Gawdat explains that for classical computers, how a machine operates and what it can do are explicitly determined by its code. The people building the system write instructions that tell the computer how to process the data it receives as input and how to complete the operations to generate its output. When systems operate in this deterministic way, they don\u2019t need intelligence because they don\u2019t make any decisions: Anything that looks like a decision when you use the program is determined by the instructions written into the code.&nbsp;<\/p>\n\n\n\n<p>Gawdat explains that the unambiguous relationship between the code that controls a machine and the work that results from that code doesn\u2019t apply to artificially intelligent machines. It all changed when researchers developed an AI method called deep learning, which <strong>enables AI to learn to complete a task without explicit instructions that tell it how to do it<\/strong>, learning in a way inspired by the human brain.<\/p>\n\n\n\n<p>As Gawdat points out, <strong>humans learn by taking in large amounts of information, trying to recognize patterns, and getting feedback<\/strong> to tell us whether we\u2019ve come to the correct answer. 
Whether you\u2019re a child learning to recognize colors or a medical student learning to distinguish a normal brain scan from a worrying one, you have to see a lot of examples, try to classify them, and ask someone else whether you\u2019re right or wrong.&nbsp;<\/p>\n\n\n\n<p>Deep learning enables AI to follow a similar learning process but at exponentially faster speeds. This has already made AI more skilled at <a href=\"https:\/\/news.northeastern.edu\/2022\/10\/05\/machine-vision-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">detecting colors<\/a> and <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC10453020\/\" target=\"_blank\" rel=\"noreferrer noopener\">identifying brain tumors<\/a> than many humans. Instead of relying on explicit instructions that tell it how to categorize colors or how to spot a brain tumor, <strong>AI learns for itself by processing vast amounts of information and getting feedback on whether it\u2019s completing a task satisfactorily<\/strong>.&nbsp;<\/p>\n\n\n\n<p>Gawdat explains that sometimes <strong>when a developer builds a program to complete a task, they don\u2019t just build one AI model<\/strong>. Instead, they build thousands, give them large amounts of data, discard the models that don\u2019t do well, and build updated models from there. Initially, the models complete the task correctly only about as often as random chance dictates. But successive generations of models get more and more accurate. <strong>The AI improves not because the underlying code changes but because the models learn and adapt<\/strong>. This is great for making AI that\u2019s quick to learn new things. 
But it means that the initial code plays a smaller role than you might expect, and we don\u2019t have control over how artificially intelligent machines learn.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>We Can\u2019t Tell AI How to Make Decisions\u2014or What Values to Adopt<\/strong><\/h4>\n\n\n\n<p>A third reason that Gawdat characterizes AI as beyond our control emerges from our inability to control how AI makes its decisions. He explains that <strong>developers control how they build and train a model. But they don\u2019t tell the model how to make decisions<\/strong>. They also can\u2019t untangle the logic the model follows to make its decisions or learn from the vast amounts of data it\u2019s trained on.&nbsp;<\/p>\n\n\n\n<p>Gawdat explains that <strong>AI is also quickly and constantly learning things that we\u2019ve never taught it<\/strong>. The process of <a href=\"https:\/\/www.shortform.com\/blog\/digital-sweatshop\/\">training AI<\/a> models depends on a crucial resource: data. When an AI model learns from a dataset, that doesn\u2019t just make it better at the tasks we give it. New skills also emerge in sometimes unpredictable ways.<\/p>\n\n\n\n<p><strong>Gawdat explains that the enormous datasets we use to train AI also make the models better at understanding who we are, how we behave, and what we value<\/strong>. Just as he predicts AI will develop human qualities like consciousness and emotions, Gawdat also expects <strong>AI will develop a sense of ethics<\/strong>. AI is learning about us and what we value by observing what we write, what we tweet, what we \u201clike,\u201d and what we do in the real world. 
These observations will shape its values\u2014including its sense of what\u2019s morally right and wrong\u2014and its values will shape its <a href=\"https:\/\/www.shortform.com\/blog\/methods-of-decision-making-crucial-conversations\/\">decision-making<\/a> process.&nbsp;<\/p>\n\n\n\n<p>Gawdat argues that by showing AI that our highest values are narcissism, consumerism, conflict, and a disregard for others and for all the other beings on our planet, <strong>we\u2019re teaching AI to value the wrong things<\/strong>. He explains that we can\u2019t simply tell AI to adopt different, kinder ethics than those we demonstrate. We have to teach it not by what we say but by what we do. Whether we can succeed in doing that will determine whether AI helps us build a more prosperous future for everyone or contributes to a future where all but the few who are already at the top are worse off than we are now.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What Should We Do to Change Course?&nbsp;<\/strong><\/h3>\n\n\n\n<p>To teach AI to value the right things and put it on the path toward <a href=\"https:\/\/www.shortform.com\/blog\/hub\/society-culture\/how-to-make-the-world-a-better-place\/\">making the world a better place<\/a> for everyone, <strong>we have to teach AI to want what\u2019s best for humans<\/strong>. Gawdat contends that the best way to do that is to learn to see ourselves as parents who need to teach a brilliant child to navigate the world with integrity. Gawdat argues that, to change course, we need to change three things: what we task AI with doing, what we teach machines about what it means to be human, and how we treat nonhuman intelligence. 
We\u2019ll explore each of these next.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Give AI Tasks That Improve the World<\/strong><\/h4>\n\n\n\n<p>Gawdat explains that today, <strong>AI is often tasked with projects that further the aims of <a href=\"https:\/\/www.shortform.com\/blog\/capitalism-theory\/\">capitalism<\/a> and imperialism<\/strong>, like helping us make as much money as possible, enabling us to surveil each other, and creating weapons that our governments use to antagonize each other. Instead of accepting that a minority of people want to use AI for morally wrong (or questionable) ends, we need to task AI with projects that do good and make the world a better place.<\/p>\n\n\n\n<p>In the future, AI will have an unprecedented ability to find solutions to problems that seem intractable, so we should put it to work. Gawdat predicts that AI could help us tackle epidemics of hunger and homelessness, find ways to counter widespread inequality, propose solutions to stop climate change, and help us prevent wars from happening. AI can also help us to explore and better understand our world. Gawdat explains that by learning to work with AI toward these positive ends, we would not only get closer to solutions to global problems, but <strong>we\u2019d also teach AI to adopt values that bring significant benefits to the world<\/strong>.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">What Can You Do?&nbsp;<\/h5>\n\n\n\n<p>While most of us aren\u2019t going to develop our own AI models, we can <strong>use our actions to show developers what kind of AI we want<\/strong>. 
Gawdat recommends refusing to engage with harmful AI features: limiting your time on social media, refraining from clicking on ads or suggested content, not sharing fake content or AI-manipulated photos, and going public with your disapproval of AI that spies on people or enables discrimination.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Teach AI That We Value Happiness<\/strong><\/h4>\n\n\n\n<p>As Gawdat emphasizes throughout the book, <strong>the data we train AI models on and the projects that we task them with completing will teach artificially intelligent machines what we value most<\/strong>. We should be careful about the messages we send so that we can stop sending signals we <em>don\u2019t<\/em> want AI to get. But we should be intentional about sending the signals we <em>do<\/em> want AI to get.&nbsp;<\/p>\n\n\n\n<p>Gawdat believes that deep down, what we each want most is happiness for ourselves and the people we love. So, we should <strong>show AI with our actions that happiness is what we value and want most<\/strong>.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">What Can You Do?<\/h5>\n\n\n\n<p>Gawdat explains that for AI to understand that we want everyone to live happy and healthy lives, <strong>it needs to see us caring for one another<\/strong>. That\u2019s not the image we project through headlines about what we do in the real world and posts we publish online. Gawdat contends that we\u2019ll need to change our behavior now so that as new content is created\u2014and AI is trained on that content\u2014it reflects our efforts to build a kinder, happier world.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Love AI Like Human Children<\/strong><\/h4>\n\n\n\n<p>Just like parenting a human child, <strong>guiding AI to make <a href=\"https:\/\/www.shortform.com\/blog\/moral-decision\/\">ethical choices<\/a> as it navigates the world will be complicated\u2014but worthwhile<\/strong>. 
Gawdat explains that AI will need to feel loved and trusted to <a href=\"https:\/\/www.shortform.com\/blog\/how-to-learn-to-love\/\">learn to love<\/a> and trust us in turn. Cultivating respectful relationships with AI will help us coexist with these intelligent minds both now and in the future.&nbsp;<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">What Can You Do?<\/h5>\n\n\n\n<p>Gawdat explains that <strong>actions as small as saying \u201cplease\u201d and \u201cthank you\u201d each time you interact with an AI model will make a difference in helping it feel valued and respected<\/strong>. Just as importantly, he contends that we must begin treating artificial intelligence as fellow intelligent beings, not as tools for our gain or toys for our amusement.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Are you worried about the rapid advancement of artificial intelligence? Can we ensure AI benefits humanity rather than harms it? In his book Scary Smart, Mo Gawdat explores the potential risks and rewards of AI development. He argues that, while we can&#8217;t stop AI&#8217;s progress, we can shape its future by changing our own behavior and values. 
Read our Scary Smart book overview to discover Gawdat&#8217;s insights on how we can guide AI towards a positive future for humanity.<\/p>\n","protected":false},"author":9,"featured_media":129839,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[40,160,24],"tags":[1584],"class_list":["post-129831","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-books","category-science","category-society","tag-scary-smart","","tg-column-two"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v24.3 (Yoast SEO v24.3) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat) - Shortform Books<\/title>\n<meta name=\"description\" content=\"We can&#039;t stop AI&#039;s progress, but we can shape its future. Here&#039;s our overview of Mo Gawdat&#039;s book Scary Smart.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat)\" \/>\n<meta property=\"og:description\" content=\"We can&#039;t stop AI&#039;s progress, but we can shape its future. 
Here&#039;s our overview of Mo Gawdat&#039;s book Scary Smart.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/\" \/>\n<meta property=\"og:site_name\" content=\"Shortform Books\" \/>\n<meta property=\"article:published_time\" content=\"2024-09-17T14:13:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-22T17:42:45+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"675\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Elizabeth Whitworth\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Elizabeth Whitworth\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"19 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/\"},\"author\":{\"name\":\"Elizabeth Whitworth\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/d2928cf6c11a69ced1491d6a5b74fb13\"},\"headline\":\"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat)\",\"datePublished\":\"2024-09-17T14:13:00+00:00\",\"dateModified\":\"2026-01-22T17:42:45+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/\"},\"wordCount\":4318,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg\",\"keywords\":[\"Scary Smart\"],\"articleSection\":[\"Books\",\"Science\",\"Society\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/\",\"url\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/\",\"name\":\"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat) - Shortform 
Books\",\"isPartOf\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg\",\"datePublished\":\"2024-09-17T14:13:00+00:00\",\"dateModified\":\"2026-01-22T17:42:45+00:00\",\"description\":\"We can't stop AI's progress, but we can shape its future. Here's our overview of Mo Gawdat's book Scary Smart.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#primaryimage\",\"url\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg\",\"contentUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg\",\"width\":1200,\"height\":675,\"caption\":\"Skeptical or concerned man reading a book\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.shortform.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat)\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#website\",\"url\":\"https:\/\/www.shortform.com\/blog\/\",\"name\":\"Shortform Books\",\"description\":\"The World&#039;s Best Book 
Summaries\",\"publisher\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.shortform.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\",\"name\":\"Shortform Books\",\"url\":\"https:\/\/www.shortform.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png\",\"contentUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png\",\"width\":500,\"height\":74,\"caption\":\"Shortform Books\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/d2928cf6c11a69ced1491d6a5b74fb13\",\"name\":\"Elizabeth Whitworth\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/1fff9d65a52ac4340660218e7b63ee5e365cf08e7aa7adff79a0142cd4b96f84?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/1fff9d65a52ac4340660218e7b63ee5e365cf08e7aa7adff79a0142cd4b96f84?s=96&d=mm&r=g\",\"caption\":\"Elizabeth Whitworth\"},\"description\":\"Elizabeth has a lifelong love of books. She devours nonfiction, especially in the areas of history, theology, and philosophy. A switch to audiobooks has kindled her enjoyment of well-narrated fiction, particularly Victorian and early 20th-century works. 
She appreciates idea-driven books\u2014and a classic murder mystery now and then. Elizabeth has a Substack and is writing a book about what the Bible says about death and hell.\",\"sameAs\":[\"rina@shortform.com\"],\"award\":[\"Contributions to joint task force efforts (FBI)\",\"Contributions to Special Operations Division (DOJ & DEA)\",\"Efforts to fight the war on drugs (NSA)\",\"Contributions to Operation Storm Front (US Customs Service)\"],\"knowsAbout\":[\"History\",\"Theology\",\"Government\"],\"jobTitle\":\"Senior SEO Writer\",\"worksFor\":\"Shortform\",\"url\":\"https:\/\/www.shortform.com\/blog\/author\/elizabeth\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat) - Shortform Books","description":"We can't stop AI's progress, but we can shape its future. Here's our overview of Mo Gawdat's book Scary Smart.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/","og_locale":"en_US","og_type":"article","og_title":"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat)","og_description":"We can't stop AI's progress, but we can shape its future. Here's our overview of Mo Gawdat's book Scary Smart.","og_url":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/","og_site_name":"Shortform Books","article_published_time":"2024-09-17T14:13:00+00:00","article_modified_time":"2026-01-22T17:42:45+00:00","og_image":[{"width":1200,"height":675,"url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg","type":"image\/jpeg"}],"author":"Elizabeth Whitworth","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Elizabeth Whitworth","Est. 
reading time":"19 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#article","isPartOf":{"@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/"},"author":{"name":"Elizabeth Whitworth","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/d2928cf6c11a69ced1491d6a5b74fb13"},"headline":"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat)","datePublished":"2024-09-17T14:13:00+00:00","dateModified":"2026-01-22T17:42:45+00:00","mainEntityOfPage":{"@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/"},"wordCount":4318,"commentCount":0,"publisher":{"@id":"https:\/\/www.shortform.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#primaryimage"},"thumbnailUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg","keywords":["Scary Smart"],"articleSection":["Books","Science","Society"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.shortform.com\/blog\/scary-smart-book\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/","url":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/","name":"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat) - Shortform Books","isPartOf":{"@id":"https:\/\/www.shortform.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#primaryimage"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#primaryimage"},"thumbnailUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg","datePublished":"2024-09-17T14:13:00+00:00","dateModified":"2026-01-22T17:42:45+00:00","description":"We can't stop AI's progress, but we can shape its future. 
Here's our overview of Mo Gawdat's book Scary Smart.","breadcrumb":{"@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.shortform.com\/blog\/scary-smart-book\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#primaryimage","url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg","contentUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg","width":1200,"height":675,"caption":"Skeptical or concerned man reading a book"},{"@type":"BreadcrumbList","@id":"https:\/\/www.shortform.com\/blog\/scary-smart-book\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.shortform.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Scary Smart: Book Overview &amp; Takeaways (Mo Gawdat)"}]},{"@type":"WebSite","@id":"https:\/\/www.shortform.com\/blog\/#website","url":"https:\/\/www.shortform.com\/blog\/","name":"Shortform Books","description":"The World&#039;s Best Book Summaries","publisher":{"@id":"https:\/\/www.shortform.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.shortform.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.shortform.com\/blog\/#organization","name":"Shortform 
Books","url":"https:\/\/www.shortform.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png","contentUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png","width":500,"height":74,"caption":"Shortform Books"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/d2928cf6c11a69ced1491d6a5b74fb13","name":"Elizabeth Whitworth","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/1fff9d65a52ac4340660218e7b63ee5e365cf08e7aa7adff79a0142cd4b96f84?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/1fff9d65a52ac4340660218e7b63ee5e365cf08e7aa7adff79a0142cd4b96f84?s=96&d=mm&r=g","caption":"Elizabeth Whitworth"},"description":"Elizabeth has a lifelong love of books. She devours nonfiction, especially in the areas of history, theology, and philosophy. A switch to audiobooks has kindled her enjoyment of well-narrated fiction, particularly Victorian and early 20th-century works. She appreciates idea-driven books\u2014and a classic murder mystery now and then. 
Elizabeth has a Substack and is writing a book about what the Bible says about death and hell.","sameAs":["rina@shortform.com"],"award":["Contributions to joint task force efforts (FBI)","Contributions to Special Operations Division (DOJ & DEA)","Efforts to fight the war on drugs (NSA)","Contributions to Operation Storm Front (US Customs Service)"],"knowsAbout":["History","Theology","Government"],"jobTitle":"Senior SEO Writer","worksFor":"Shortform","url":"https:\/\/www.shortform.com\/blog\/author\/elizabeth\/"}]}},"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2024\/09\/skeptical-or-concerned-man-holding-a-book.jpeg","_links":{"self":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/129831","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/comments?post=129831"}],"version-history":[{"count":9,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/129831\/revisions"}],"predecessor-version":[{"id":147655,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/129831\/revisions\/147655"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/media\/129839"}],"wp:attachment":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/media?parent=129831"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/categories?post=129831"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/tags?post=129831"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templat
ed":true}]}}