{"id":115858,"date":"2023-10-26T11:51:00","date_gmt":"2023-10-26T15:51:00","guid":{"rendered":"https:\/\/www.shortform.com\/blog\/?p=115858"},"modified":"2023-10-26T13:58:13","modified_gmt":"2023-10-26T17:58:13","slug":"nick-bostrom-superintelligence","status":"publish","type":"post","link":"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/","title":{"rendered":"Nick Bostrom&#8217;s Superintelligence: Overview &#038; Takeaways"},"content":{"rendered":"\n<p>What&#8217;s Nick Bostrom&#8217;s <em>Superintelligence<\/em> about? Could artificial intelligence ever surpass human intelligence?<\/p>\n\n\n\n<p>According to Oxford philosopher Nick Bostrom&#8217;s book <em>Superintelligence<\/em>, there\u2019s a very real possibility that AI could one day rival, and then vastly exceed, human intelligence. When and if this happens, the future of humankind will depend more on AI-generated decisions than human decisions.<\/p>\n\n\n\n<p>Read below for a brief overview of <em>Superintelligence<\/em> by Nick Bostrom.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-superintelligence-by-nick-bostrom\"><strong><em>Superintelligence<\/em> by Nick Bostrom<\/strong><\/h2>\n\n\n\n<p>Oxford philosopher Nick Bostrom&#8217;s <a href=\"https:\/\/global.oup.com\/academic\/product\/superintelligence-9780198739838?cc=us&amp;lang=en&amp;\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Superintelligence<\/em><\/a> was written in 2014 to raise awareness about the possibility of AI suddenly exceeding human capabilities, spark discussion about the risks inherent in this scenario, and foster collaboration in managing those risks.<\/p>\n\n\n\n<p>Today, the possibility of AI rivaling or even vastly exceeding human intelligence doesn\u2019t seem as far-fetched as it did when Bostrom was writing a decade ago. 
In this article, we\u2019ll consider his arguments for the feasibility of an AI rising to superhuman intelligence and his belief that creating such an AI without the right controls in place could be the worst\u2014and maybe the last\u2014mistake in human history. Finally, we\u2019ll take a look at the controls Bostrom says we need to implement to build AIs safely.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-the-feasibility-of-superintelligent-ai\"><strong>The Feasibility of Superintelligent AI<\/strong><\/h3>\n\n\n\n<p>Bostrom defines \u201csuperintelligence\u201d as general intelligence that\u2019s significantly greater than human-level intelligence. As he explains, \u201cgeneral intelligence\u201d refers to intellectual abilities that span the whole range of human capabilities, such as learning, interpreting raw data to draw useful inferences, making decisions, recognizing risks, and allowing for uncertainty when making decisions. He notes that while some computers already surpass humans in certain narrow areas, such as playing a particular game or crunching numbers, no AI has yet come close to human-level general intelligence.&nbsp;<\/p>\n\n\n\n<p>But could an artificial, nonhuman entity ever have superintelligence? Bostrom argues that the answer is, most likely, yes. As he explains, silicon computers have a number of advantages over human brains. For one thing, they operate much faster. Neural signals travel about 120 meters per second, and neurons can cycle at a maximum frequency of about 200 hertz. By contrast, electronic signals travel at the speed of light (300,000,000 meters per second), and electronic processors often cycle at 2 billion hertz or more. 
In addition, computers can copy and share data and software directly, while humans have to learn gradually.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-different-routes-to-superintelligent-ai\"><strong>Different Routes to Superintelligent AI<\/strong><\/h4>\n\n\n\n<p>As Bostrom explains, there are a number of different ways that <a href=\"https:\/\/www.shortform.com\/blog\/superintelligent-ai\/\">superintelligent AI<\/a> could be achieved. Thus, even if some of them don\u2019t end up working, at least one of them probably will.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-intelligent-design\">Intelligent Design<\/h5>\n\n\n\n<p>One route to superintelligent AI that Bostrom discusses is human programmers developing a \u201cseed AI\u201d that has some level of general intelligence\u2014perhaps similar to human intelligence, or maybe a little below that mark. Then they use the AI to continue improving the program. As the AI gets smarter, it improves itself more quickly. Because of this self-reinforcing cycle, it might progress from sub-human to superhuman intelligence rather quickly.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-simulated-evolution\">Simulated Evolution<\/h5>\n\n\n\n<p>Another route that Bostrom discusses is \u201csimulated evolution.\u201d In the context of software engineering, this means programming a computer to generate random variations of a program, test their functionality against specified criteria, and continue to iterate on the best ones. Theoretically, simulated evolution can provide novel solutions to programming problems without the need for new insight on the part of human programmers. 
Thus, even if human programmers can\u2019t figure out how to create a superintelligent AI or a self-improving seed AI directly, they might be able to create one using simulated evolution.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-brain-simulations\">Brain Simulations<\/h5>\n\n\n\n<p>Yet another route to which Bostrom devotes considerable attention is \u201cwhole brain emulation.\u201d The human brain is obviously capable of human-level general intelligence. Thus, if you could map out exactly how all the neurons in a human brain are connected and create a computer program to accurately simulate all those connections, you would have a program capable of human-level intelligence. And if the computer program could operate faster than the original brain, it would have superhuman intelligence.<\/p>\n\n\n\n<p>Bostrom explains that creating a simulated human brain requires a basic understanding of how neurons interact with each other and a detailed cellular-level 3D scan of a human brain. However, it <em>doesn\u2019t<\/em> require an understanding of how the brain\u2019s structures give rise to intelligence\u2014assuming the simulation captures the placement of neurons accurately, it should, theoretically, mimic the brain\u2019s function even if its developers don\u2019t know exactly how or why. Thus, the main obstacle to implementing this method is scanning a human brain precisely enough.&nbsp;<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-spontaneous-generation\">Spontaneous Generation<\/h5>\n\n\n\n<p>Finally, Bostrom points out that it might be possible to create a superintelligent AI <em>inadvertently<\/em>. Scientists don\u2019t know exactly what the minimum set of components or capabilities for general intelligence is. There\u2019s already a lot of software that performs specific information processing operations and has the ability to send and receive data over the internet. 
Hypothetically, a programmer could create a piece of software that, by itself, isn\u2019t even considered AI but happens to supply the final missing component of general intelligence. A superintelligent AI could then arise spontaneously on the internet as the new software begins to communicate with all the other software that\u2019s already running.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-the-consequences-of-superintelligent-ai\"><strong>The Consequences of Superintelligent AI<\/strong><\/h3>\n\n\n\n<p>So, sooner or later, a superintelligent AI will likely be created. Why should that concern you any more than the fact that mechanical vehicles can go faster than a human can run? According to Bostrom, the rise of a superintelligent AI could cause dramatic changes in how the world works\u2014changes that would take place very quickly. And depending on the superintelligent AI\u2019s behavior, these changes could be very detrimental to humanity.<\/p>\n\n\n\n<p>As we mentioned earlier, if an AI has some measure of general intelligence and the ability to modify its own programming, its intelligence would likely increase at an ever-accelerating rate. This implies that an AI might rise from sub-human to superhuman intelligence very quickly.&nbsp;<\/p>\n\n\n\n<p>Moreover, as Bostrom points out, superior intelligence is what has allowed humans to dominate the other life forms on planet Earth. 
Thus, it stands to reason that once a superintelligent AI exists, the fate of humanity will suddenly depend more on what the superintelligent AI does than on what humans do\u2014just as the existence of most animal species depends more on what humans do (either to take care of domestic animals or to preserve or destroy habitat that sustains wild animals) than on what the animals themselves do.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-the-abilities-of-superintelligent-ai\"><strong>The Abilities of Superintelligent AI<\/strong><\/h4>\n\n\n\n<p>But how would a superintelligent AI actually gain or wield power over the earth if it exists only as a computer program? Bostrom lists some abilities that an AI would have as soon as it became superintelligent.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>It would be capable of strategic thinking.<\/strong> Consequently, it could develop plans to achieve long-term objectives and would take into account any applicable opposition.<\/li><li><strong>It could manipulate and persuade.<\/strong> It could figure out how to get humans to do what it wanted them to, much like a human might train a dog to play fetch. 
Humans might not even realize the superintelligent AI was trying to manipulate them.<\/li><li><strong>It would be a superlative hacker.<\/strong> It could gain access to virtually all networked technology without needing anyone\u2019s permission.<\/li><li><strong>It would be good at engineering and development.<\/strong> If it needed new technology or other devices that didn\u2019t exist yet in order to achieve its objectives, it could design them.<\/li><li><strong>It would be capable of business thinking.<\/strong> It could figure out ways to generate income and amass financial resources.<\/li><\/ul>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-the-destructiveness-of-superintelligent-ai\"><strong>The Destructiveness of Superintelligent AI<\/strong><\/h4>\n\n\n\n<p>Clearly, a superintelligent AI with the capabilities listed above would be a powerful entity. But why should we expect it to use its power to the detriment of humankind? Wouldn\u2019t a superintelligent AI be smart enough to use its power responsibly?&nbsp;<\/p>\n\n\n\n<p>According to Bostrom, not necessarily. He explains that <em>intelligence<\/em> is the ability to figure out <a href=\"https:\/\/www.shortform.com\/blog\/how-to-achieve\/\">how to achieve<\/a> your objectives. By contrast, <em>wisdom<\/em> is the ability to discern between good and bad objectives. Wisdom and intelligence are independent of each other: You can be good at figuring out <a href=\"https:\/\/www.shortform.com\/blog\/how-to-get-things-done\/\">how to get things done<\/a> (high intelligence) and yet have poor judgment (low wisdom) about <em>what<\/em> is important to get done or even ethically appropriate.&nbsp;<\/p>\n\n\n\n<p>What objectives would a superintelligent AI want to pursue? According to Bostrom, this is impossible to predict with certainty. However, he points out that existing AIs tend to have relatively narrow and simplistic objectives. 
If an AI started out with narrowly defined objectives and then became superintelligent without modifying its objectives, the results could be disastrous: Since power can be used to pursue almost any objective more effectively, such an AI might use up all the world\u2019s resources to pursue its objectives, disregarding all other concerns.<\/p>\n\n\n\n<p>For example, a stock-trading AI might be programmed to maximize the long-term <a href=\"https:\/\/www.shortform.com\/blog\/how-to-calculate-expected-value\/\">expected value<\/a> (measured in dollars) of the portfolio that it manages. If this AI became superintelligent, it might find a way to trigger hyperinflation, because devaluing the dollar by a large factor would radically increase the dollar value of its portfolio. It would probably also find a way to lock out the original owners of the portfolio it was managing, to prevent them from withdrawing any money and thereby reducing the value of the account.&nbsp;<\/p>\n\n\n\n<p>Moreover, it might pursue an agenda of world domination just because more power would put it in a better position to increase the value of its portfolio\u2014whether by influencing markets, commandeering assets to add to its portfolio, or other means. It would have no regard for human wellbeing, except insofar as human wellbeing affected the value of its portfolio. And since human influences on stock prices can be fickle, it might even take action to remove all humans from the market so as to reduce the uncertainty in its value projections. 
Eventually, it would amass all the world\u2019s wealth into its portfolio, leaving humans impoverished and perhaps even starving humanity into extinction.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-how-to-manage-the-rise-of-superhuman-intelligence\"><strong>How to Manage the Rise of Superhuman Intelligence<\/strong><\/h3>\n\n\n\n<p>What can we do to make sure a superintelligent AI doesn\u2019t destroy humankind or relegate humans to miserable living conditions?&nbsp;<\/p>\n\n\n\n<p>In principle, one option would be never to develop general AI in the first place. However, Bostrom doesn\u2019t recommend this option. In practice, even if AI research were illegal, someone would probably do it anyway. And even if they didn\u2019t, as we discussed earlier, it could still happen accidentally.&nbsp;<\/p>\n\n\n\n<p>But more importantly, Bostrom points out that a superintelligent AI could also be very good for humanity if it helped us instead of wiping us out. The superintelligent AI might be able to develop solutions to problems that humans have thus far been unable to solve, like reining in climate change, colonizing outer space, and bringing about world peace. Thus, rather than opposing AI research, Bostrom advocates a three-pronged approach to making sure it\u2019s beneficial: Impose limits on the superintelligent AI, give it good objectives, and manage the development schedule to make sure the right measures are in place before AI achieves superintelligence. We\u2019ll discuss each of these in turn.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-imposing-limits-on-a-superhuman-ai\"><strong>Imposing Limits on a Superhuman AI<\/strong><\/h4>\n\n\n\n<p>Bostrom cautions that a superintelligent AI would eventually be able to circumvent any controls or limitations that humans placed upon it. However, that doesn\u2019t mean imposing limits is a waste of time. 
It just means we need to understand the risks associated with different kinds of limits that might be used.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-physical-containment\">Physical Containment<\/h5>\n\n\n\n<p>One safety measure is simply to develop AI on a computer of limited power that is physically isolated so it can\u2019t connect to the internet. In principle, this could allow us to study superintelligent AIs in isolation until we understand how to deploy them safely in other contexts.<\/p>\n\n\n\n<p>However, in practice, this is still risky. It might be difficult to assess just how intelligent an AI under study has become. A superintelligent AI would probably realize it was being held in confinement and figure out what kind of behavior the human researchers were trying to induce. It might then feign docile or dumb behavior to put them at ease, or find other ways to manipulate them into giving it access to additional hardware. Between manipulating humans and finding novel ways to use the hardware at its disposal, a sufficiently intelligent AI could eventually circumvent physical containment measures.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-manual-review\">Manual Review<\/h5>\n\n\n\n<p>Bostrom notes that many people have suggested simply programming the AI to ask for permission from its human operators before it does anything. Instead of allowing it to make improvements to its own programming any time, the program could require <a href=\"https:\/\/www.shortform.com\/blog\/human-intervention\/\">human intervention<\/a> to approve each new version. This would give programmers a chance to look over the code, giving them an idea of what additional capabilities the new version would have and allowing them to suspend the AI\u2019s development at any stage. 
Similarly, the AI\u2019s programming could require human intervention before any action that could affect the outside world.&nbsp;<\/p>\n\n\n\n<p>However, as the AI\u2019s intelligence advanced beyond the human level, eventually human programmers wouldn\u2019t be able to understand the code it proposed well enough to accurately assess what new capabilities and risks it would add.&nbsp;<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-reward-and-punishment-signals\">Reward and Punishment Signals<\/h5>\n\n\n\n<p>Another option that Bostrom discusses is to program the AI to respond to rewards and punishments. You could build a computer system with a reward button and a punishment button and program the AI to minimize the number of punishment signals it receives and maximize the number of reward signals. This would be easier to program than trying to translate \u201cjust do whatever your operators want you to do\u201d into computer code, and it would achieve the same result.<\/p>\n\n\n\n<p>The risk, Bostrom explains, is that the AI might eventually circumvent the system. For example, maybe it builds a robot to push the reward button constantly and finds a way to keep humans out of the building so the punishment button cannot be pressed.<\/p>\n\n\n\n<p>And if it worked correctly, giving the human operators full control over the AI, that would create another risk: As we\u2019ve discussed, a superintelligent AI would be immensely powerful. Human operators might be tempted to abuse that power.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-simultaneous-development\">Simultaneous Development<\/h5>\n\n\n\n<p>Finally, Bostrom explains it might be possible to synchronize multiple AI development projects so that when AI becomes superintelligent, there would be many independent superintelligent AIs, all of comparable intelligence and capabilities. 
They would then keep each other\u2019s power in check, much the way human societies constrain individual power.<\/p>\n\n\n\n<p>However, Bostrom cautions that limiting the power of individual superintelligent AIs doesn\u2019t guarantee that <em>any<\/em> of them will act in the best interests of humankind. Nor does this approach completely eliminate the potential for a single superintelligent AI to take control of the world, because one might eventually achieve dominance over the others.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-imparting-the-right-imperatives\"><strong>Imparting the Right Imperatives<\/strong><\/h4>\n\n\n\n<p>According to Bostrom, making sure every superintelligent AI has good ultimate motives may be the most important part of AI development. This is because, as we\u2019ve discussed, other control measures are only temporary. Ultimately the superintelligent AI\u2019s own motives will be the only thing that constrains its behavior. Bostrom discusses a number of approaches to programming good motives.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-hard-coded-commandments\">Hard-Coded Commandments<\/h5>\n\n\n\n<p>As Bostrom remarks, one approach is to hard-code a set of imperatives that constrain the AI\u2019s behavior. However, he expects that this is not practicable. Human legal codes illustrate the challenges of concretely defining the distinction between acceptable and unacceptable behavior: Even the best legal codes have loopholes, can be misinterpreted or misapplied, and require occasional changes. 
To write a comprehensive code of conduct for a superintelligent AI that would be universally applicable for all time would be a monumental task, and probably an impossible one.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-existing-motives\">Existing Motives<\/h5>\n\n\n\n<p>Another approach that Bostrom discusses is to create a superintelligent AI by increasing the intelligence of an entity that <em>already has good motives<\/em>, rather than trying to program them from scratch. This approach might be an option if superintelligent AI is achieved by the method of brain simulation: Choose a person with exemplary character and scan her brain to create the original model, then run the simulation on a supercomputer that allows it to think much faster than a biological brain.&nbsp;<\/p>\n\n\n\n<p>However, Bostrom points out that there is a risk that nuances of character, like a person\u2019s code of ethics, might not be faithfully preserved in the simulation. Furthermore, even a faithful simulation of someone with good moral character might be tempted to abuse the powers of a superintelligent AI.&nbsp;<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-discoverable-ethics\">Discoverable Ethics<\/h5>\n\n\n\n<p>Bostrom concludes that <strong>the best method of endowing a superintelligent AI with good motives will likely be to give it criteria for figuring out what is right and letting it set its own goals.<\/strong> After all, a superintelligent AI would be able to figure out what humans want from it and program itself accordingly better than human programmers could. This approach would also make the superintelligent AI behave somewhat more cautiously, because it would always have some uncertainty about its ultimate goals.<\/p>\n\n\n\n<p>However, Bostrom also notes that (at least as of 2014) no one had developed a rigorous algorithm for this approach, so there\u2019s a risk that this method might not be feasible in practice. 
And even if we assume that the basic programming problem will eventually be solved, deciding what criteria to give the AI is still a non-trivial problem.&nbsp;<\/p>\n\n\n\n<p>For one thing, if the AI focuses on what its original programmers want, it would prioritize the desires of a few people over all others. It would be more equitable to have it figure out what <em>everyone<\/em> wants and generally take no action on issues that people disagree about. But for any given course of action, there\u2019s probably <em>somebody<\/em> who has a dissenting opinion, so where should the AI draw the line?&nbsp;<\/p>\n\n\n\n<p>Then there\u2019s the problem of humans\u2019 own conflicting desires. For example, maybe one of the programmers on the project is trying to quit smoking. At some level, she wants a cigarette, but she wouldn\u2019t want the AI to pick up on her craving and start smuggling her cigarettes as she\u2019s trying to kick her smoking habit.<\/p>\n\n\n\n<p>Bostrom describes two possible solutions to this problem. One is to program the AI to account for such conflicting desires. Instead of just figuring out what humans want, have it figure out what humans <em>would<\/em> want if they were more like the people they want to be. The other is to program the AI to figure out and pursue what is <em>morally right<\/em> instead of what <em>people want<\/em>, per se.&nbsp;<\/p>\n\n\n\n<p>But both solutions entail some risks. Even what people want to want might not be what\u2019s best for them, and even what\u2019s morally best in an abstract sense might not be what they want. Moreover, humans have yet to unanimously agree on a definition or model of morality.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-managing-the-development-schedule\"><strong>Managing the Development Schedule<\/strong><\/h4>\n\n\n\n<p>As we mentioned earlier, Bostrom believes superintelligent AI will probably be developed eventually, regardless of how hard we try to prevent it. 
However, he also points out an important caveat: There\u2019s a strong correlation between <em>research<\/em> and the <em>rate of progress<\/em> of artificial intelligence systems. Thus, Bostrom advises <strong>stepping up the pace of research into methods of controlling highly intelligent AIs and programming them to pursue wholesome goals<\/strong> while <strong>reducing our focus on the development of advanced AI itself<\/strong>.<\/p>\n\n\n\n<p>This is because the ultimate outcome of developing superintelligent AI depends largely on the <em>order<\/em> in which certain technological breakthroughs are made. If rigorous safeguards are developed before AIs become superintelligent, there\u2019s a good chance the development of superintelligent AI will be beneficial for humankind. But if it\u2019s the other way around, the consequences could be disastrous, as we\u2019ve discussed.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>What&#8217;s Nick Bostrom&#8217;s Superintelligence about? Could artificial intelligence ever surpass human intelligence? According to Oxford philosopher Nick Bostrom&#8217;s book Superintelligence, there\u2019s a very real possibility that AI could one day rival, and then vastly exceed, human intelligence. When and if this happens, the future of humankind will depend more on AI-generated decisions than human decisions. 
Read below for a brief overview of Superintelligence by Nick Bostrom.<\/p>\n","protected":false},"author":14,"featured_media":86846,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[40,160,24],"tags":[1300],"class_list":["post-115858","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-books","category-science","category-society","tag-superintelligence","","tg-column-two"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v24.3 (Yoast SEO v24.3) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Nick Bostrom&#039;s Superintelligence: Overview &amp; Takeaways - Shortform Books<\/title>\n<meta name=\"description\" content=\"In Nick Bostrom&#039;s Superintelligence, he warns that creating a super AI would be the worst mistake of humanity. Read more in our overview.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Nick Bostrom&#039;s Superintelligence: Overview &amp; Takeaways\" \/>\n<meta property=\"og:description\" content=\"In Nick Bostrom&#039;s Superintelligence, he warns that creating a super AI would be the worst mistake of humanity. 
Read more in our overview.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/\" \/>\n<meta property=\"og:site_name\" content=\"Shortform Books\" \/>\n<meta property=\"article:published_time\" content=\"2023-10-26T15:51:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-10-26T17:58:13+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/s3.amazonaws.com\/wordpress.shortform.com\/blog\/wp-content\/uploads\/2022\/12\/open-book-black-and-white.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1924\" \/>\n\t<meta property=\"og:image:height\" content=\"1230\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Katie Doll\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Katie Doll\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"15 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/\"},\"author\":{\"name\":\"Katie Doll\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/c3e1b539e89423b544ede91ab2bff937\"},\"headline\":\"Nick Bostrom&#8217;s Superintelligence: Overview &#038; Takeaways\",\"datePublished\":\"2023-10-26T15:51:00+00:00\",\"dateModified\":\"2023-10-26T17:58:13+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/\"},\"wordCount\":3336,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2022\/12\/open-book-black-and-white.jpg\",\"keywords\":[\"Superintelligence\"],\"articleSection\":[\"Books\",\"Science\",\"Society\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/\",\"url\":\"https:\/\/www.shortform.com\/blog\/nick-bostrom-superintelligence\/\",\"name\":\"Nick Bostrom's Superintelligence: Overview & Takeaways - Shortform 