{"id":115876,"date":"2023-10-21T14:57:00","date_gmt":"2023-10-21T18:57:00","guid":{"rendered":"https:\/\/www.shortform.com\/blog\/?p=115876"},"modified":"2023-10-26T13:58:02","modified_gmt":"2023-10-26T17:58:02","slug":"regulation-of-ai","status":"publish","type":"post","link":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/","title":{"rendered":"Regulation of AI: 4 Ways to Impose Limits on Superintelligence"},"content":{"rendered":"\n<p>What type of regulation should be placed on AI? Is it a waste of time to put limitations on AI?<\/p>\n\n\n\n<p>In <em>Superintelligence<\/em>, Nick Bostrom cautions that a <a href=\"https:\/\/www.shortform.com\/blog\/superintelligent-ai\/\">superintelligent AI<\/a> would eventually be able to circumvent any controls or limitations that humans placed upon it. However, that doesn&#8217;t mean imposing limits is a waste of time.<\/p>\n\n\n\n<p>Here are four approaches to regulating AI.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-1-physical-containment\">1. Physical Containment<\/h2>\n\n\n\n<p>One approach to regulating AI is simply to develop it on a computer of limited power that is physically isolated so that it can\u2019t connect to the internet. In principle, this could allow us to study superintelligent AIs in isolation until we understand how to deploy them safely in other contexts.<\/p>\n\n\n\n<p>However, in practice, this is still risky. It might be difficult to assess just how intelligent an AI under study has become. A superintelligent AI would probably realize it was being held in confinement and figure out what kind of behavior the human researchers were trying to induce. It might then feign docile or dumb behavior to put them at ease, or find other ways to manipulate them into giving it access to additional hardware. 
Between manipulating humans and finding novel ways to use the hardware at its disposal, a sufficiently intelligent AI could eventually circumvent physical containment measures.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Applying Physical Containment<\/strong><br><br>Based on Bostrom\u2019s description, to make physical containment work, we need a way to accurately assess an AI\u2019s capabilities and motives before it evolves enough to circumvent the containment measures. This is problematic because, despite the progress in AI over the last decade, <a href=\"https:\/\/www.nature.com\/immersive\/d41586-023-02822-z\/index.html\" target=\"_blank\" rel=\"noreferrer noopener\">scientists have yet to develop a reliable method for measuring the intelligence of AI.<\/a><br><br>Many tests have been proposed. The most famous is the Turing test, which relies on human intuition to discern between a human and a machine. Other tests attempt to measure reasoning capability based on the ability to complete graphical puzzles or infer implied meanings from sentences. But so far, all these tests leave something to be desired\u2014in many cases, computer programs can beat humans at the tests even though it seems intuitively clear that the algorithms don\u2019t have anything close to human-level intelligence.<br><br>Part of the problem is that most of the intelligence tests scientists have devised to date are well-documented in scientific journals, and LLMs incorporate essentially everything ever written into their training data. Thus, testing the AI is like giving a test to a student who has memorized the answer key: She can give the right answers even if she has no understanding of the material the test is supposed to measure.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-2-manual-review\">2. 
Manual Review<\/h2>\n\n\n\n<p>Bostrom notes that many people have suggested simply programming the AI to ask for permission from its human operators before it does anything. Instead of allowing it to make improvements to its own programming at any time, the program could require <a href=\"https:\/\/www.shortform.com\/blog\/human-intervention\/\">human intervention<\/a> to approve each new version. This would give programmers a chance to look over the code, giving them an idea of what additional capabilities the new version would have and allowing them to suspend the AI\u2019s development at any stage. Similarly, the AI\u2019s programming could require human intervention before any action that could affect the outside world.&nbsp;<\/p>\n\n\n\n<p>However, as the AI\u2019s intelligence advanced beyond the human level, human programmers would eventually be unable to understand the code it proposed well enough to accurately assess what new capabilities and risks it would add.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Applying Manual Review<\/strong><br><br>Even before an AI becomes appreciably more intelligent than its human designers, manual review would likely have to be combined with another control, such as physical containment, in order to provide an effective safeguard. This is because, as Peter Thiel notes, AI development\u2014like all other R&amp;D and first-of-a-kind projects\u2014involves its share of <a href=\"https:\/\/www.shortform.com\/app\/book\/zero-to-one\" target=\"_blank\" rel=\"noreferrer noopener\">unknown unknowns and unanticipated results<\/a>.<br><br>If the AI proposes a novel change to its code, the <em>full<\/em> effect of the change may not become apparent until the code is actually compiled and executed. If the change could be evaluated safely in containment, this testing could be part of the \u201creview\u201d process. 
But without such additional controls in place, testing could be extremely dangerous, given the potentially destructive power of AIs that we discussed in the previous section.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-3-reward-and-punishment-signals\">3. Reward and Punishment Signals<\/h2>\n\n\n\n<p>Another option that Bostrom discusses is to program the AI to respond to rewards and punishments. You could build a computer system with a reward button and a punishment button and program the AI to minimize the number of punishment signals it receives and maximize the number of reward signals. This would be easier to program than trying to translate \u201cjust do whatever your operators want you to do\u201d into computer code, and it would achieve the same result.<\/p>\n\n\n\n<p>The risk, Bostrom explains, is that the AI might eventually circumvent the system. For example, it might build a robot to press the reward button constantly and find a way to keep humans out of the building so that no one could press the punishment button.<\/p>\n\n\n\n<p>And even if the system worked correctly, giving the human operators full control over the AI, that would create another risk: As we\u2019ve discussed, a superintelligent AI would be immensely powerful. 
Human operators might be tempted to abuse that power.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Applying Rewards and Punishments<\/strong><br><br>In <a href=\"https:\/\/shortform.com\/app\/book\/carrots-and-sticks-don-t-work\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Carrots and Sticks Don\u2019t Work<\/em><\/a>, Paul Marciano argues that traditional reward-and-punishment systems are outdated and are <a href=\"https:\/\/shortform.com\/app\/book\/carrots-and-sticks-don-t-work\/chapter-1\" target=\"_blank\" rel=\"noreferrer noopener\">no longer effective in the modern workplace<\/a><em>.<\/em> Leaders once relied, fairly successfully, on corporal punishment to control manual laborers (many of whom were slaves or criminals) or on rewards to motivate factory workers. But as the nature of work has become more mentally intensive, workers\u2019 needs and values have evolved to the point where a different approach is needed.<br><br>It may be worth considering whether AI\u2019s motives could similarly evolve such that traditional rewards and punishments would no longer be effective methods of control. Marciano\u2019s approach to management (which is based on building employee trust through supportive feedback, recognition, and empowerment) wouldn\u2019t necessarily work on AI, since AI might not develop the same values as a human thought worker. But perhaps programmers could take a conceptually similar approach of adapting rewards and punishments as the AI advanced.&nbsp;<br><br>Again, this approach to control would likely have to be combined with physical containment, so that researchers could study the AI enough to learn how to manage it effectively before turning it loose on the world. 
If it could be done effectively, this might provide a solution to the risk Bostrom describes of the AI finding ways to game the reward-and-punishment system.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-4-simultaneous-development\">4. Simultaneous Development<\/h2>\n\n\n\n<p>Finally, Bostrom explains that it might be possible to synchronize multiple AI development projects so that when AI became superintelligent, there would be many independent superintelligent AIs, all of comparable intelligence and capabilities. They would then keep each other\u2019s power in check, much the way human societies constrain individual power.<\/p>\n\n\n\n<p>However, Bostrom cautions that limiting the power of individual superintelligent AIs doesn\u2019t guarantee that <em>any<\/em> of them will act in the best interests of humankind. Nor does this approach completely eliminate the potential for a single superintelligent AI to take control of the world, because one might eventually achieve dominance over the others.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Applying Simultaneous Development<\/strong><br><br>As Bostrom notes, simultaneous development controls wouldn\u2019t give humans control of AIs, per se. But if reward-and-punishment controls (or other methods) proved effective for giving human operators control of superintelligent AIs, simultaneous development controls <em>could<\/em> be used to mitigate the risk of human operators abusing the superintelligent AI\u2019s powers.&nbsp;<br><br>Each team of human operators would naturally direct their AI to act in their own best interests, and different teams would act to check and balance each other\u2019s power. 
If there were enough teams with AIs of equal power to faithfully represent everyone\u2019s interests, then the AIs would only be used to further humanity\u2019s mutual best interests.<br><br>However, since this approach depends both on synchronizing the development of superintelligent AIs <em>and<\/em> on maintaining human control of them, it might end up being a fragile balance of power, and one that would probably only work temporarily.<\/td><\/tr><\/tbody><\/table><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>What type of regulation should be put on AI? Is it a waste of time to put limitations on AI? In Superintelligence, Nick Bostrom cautions that a superintelligent AI would eventually be able to circumvent any controls or limitations that humans placed upon it. However, that doesn&#8217;t mean imposing limits is a waste of time. Here&#8217;s how to conduct proper regulation of AI.<\/p>\n","protected":false},"author":14,"featured_media":30927,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[20,160,24],"tags":[1300],"class_list":["post-115876","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","category-science","category-society","tag-superintelligence","","tg-column-two"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v24.3 (Yoast SEO v24.3) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Regulation of AI: 4 Ways to Impose Limits on Superintelligence - Shortform Books<\/title>\n<meta name=\"description\" content=\"If AI will be smarter than humans, why bother trying to stop it? There are still ways to propose a safe regulation of AI. 
Check them out.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Regulation of AI: 4 Ways to Impose Limits on Superintelligence\" \/>\n<meta property=\"og:description\" content=\"If AI will be smarter than humans, why bother trying to stop it? There are still ways to propose a safe regulation of AI. Check them out.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"Shortform Books\" \/>\n<meta property=\"article:published_time\" content=\"2023-10-21T18:57:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-10-26T17:58:02+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/s3.amazonaws.com\/wordpress.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1080\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Katie Doll\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Katie Doll\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/\"},\"author\":{\"name\":\"Katie Doll\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/c3e1b539e89423b544ede91ab2bff937\"},\"headline\":\"Regulation of AI: 4 Ways to Impose Limits on Superintelligence\",\"datePublished\":\"2023-10-21T18:57:00+00:00\",\"dateModified\":\"2023-10-26T17:58:02+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/\"},\"wordCount\":1369,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg\",\"keywords\":[\"Superintelligence\"],\"articleSection\":[\"Ethics\",\"Science\",\"Society\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/\",\"url\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/\",\"name\":\"Regulation of AI: 4 Ways to Impose Limits on Superintelligence - Shortform 
Books\",\"isPartOf\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg\",\"datePublished\":\"2023-10-21T18:57:00+00:00\",\"dateModified\":\"2023-10-26T17:58:02+00:00\",\"description\":\"If AI will be smarter than humans, why bother trying to stop it? There are still ways to propose a safe regulation of AI. Check them out.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#primaryimage\",\"url\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg\",\"contentUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg\",\"width\":1920,\"height\":1080},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.shortform.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Regulation of AI: 4 Ways to Impose Limits on Superintelligence\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#website\",\"url\":\"https:\/\/www.shortform.com\/blog\/\",\"name\":\"Shortform Books\",\"description\":\"The World&#039;s Best Book 
Summaries\",\"publisher\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.shortform.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\",\"name\":\"Shortform Books\",\"url\":\"https:\/\/www.shortform.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png\",\"contentUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png\",\"width\":500,\"height\":74,\"caption\":\"Shortform Books\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/c3e1b539e89423b544ede91ab2bff937\",\"name\":\"Katie Doll\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/6239731a3fc739640b80be30f2b1727a055d3535d0ee4569e8282faa323e47fc?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/6239731a3fc739640b80be30f2b1727a055d3535d0ee4569e8282faa323e47fc?s=96&d=mm&r=g\",\"caption\":\"Katie Doll\"},\"description\":\"Somehow, Katie was able to pull off her childhood dream of creating a career around books after graduating with a degree in English and a concentration in Creative Writing. 
Her preferred genre of books has changed drastically over the years, from fantasy\/dystopian young-adult to moving novels and non-fiction books on the human experience. Katie especially enjoys reading and writing about all things television, good and bad.\",\"knowsAbout\":[\"Bachelor of Arts in English With a Concentration in Creative Writing\"],\"jobTitle\":\"Senior SEO Writer\",\"worksFor\":\"Shortform\",\"url\":\"https:\/\/www.shortform.com\/blog\/author\/katie\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Regulation of AI: 4 Ways to Impose Limits on Superintelligence - Shortform Books","description":"If AI will be smarter than humans, why bother trying to stop it? There are still ways to propose a safe regulation of AI. Check them out.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/","og_locale":"en_US","og_type":"article","og_title":"Regulation of AI: 4 Ways to Impose Limits on Superintelligence","og_description":"If AI will be smarter than humans, why bother trying to stop it? There are still ways to propose a safe regulation of AI. Check them out.","og_url":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/","og_site_name":"Shortform Books","article_published_time":"2023-10-21T18:57:00+00:00","article_modified_time":"2023-10-26T17:58:02+00:00","og_image":[{"width":1920,"height":1080,"url":"https:\/\/s3.amazonaws.com\/wordpress.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg","type":"image\/jpeg"}],"author":"Katie Doll","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Katie Doll","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#article","isPartOf":{"@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/"},"author":{"name":"Katie Doll","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/c3e1b539e89423b544ede91ab2bff937"},"headline":"Regulation of AI: 4 Ways to Impose Limits on Superintelligence","datePublished":"2023-10-21T18:57:00+00:00","dateModified":"2023-10-26T17:58:02+00:00","mainEntityOfPage":{"@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/"},"wordCount":1369,"commentCount":0,"publisher":{"@id":"https:\/\/www.shortform.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg","keywords":["Superintelligence"],"articleSection":["Ethics","Science","Society"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/","url":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/","name":"Regulation of AI: 4 Ways to Impose Limits on Superintelligence - Shortform Books","isPartOf":{"@id":"https:\/\/www.shortform.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#primaryimage"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg","datePublished":"2023-10-21T18:57:00+00:00","dateModified":"2023-10-26T17:58:02+00:00","description":"If AI will be smarter than humans, why bother trying to stop it? 
There are still ways to propose a safe regulation of AI. Check them out.","breadcrumb":{"@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.shortform.com\/blog\/regulation-of-ai\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#primaryimage","url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg","contentUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg","width":1920,"height":1080},{"@type":"BreadcrumbList","@id":"https:\/\/www.shortform.com\/blog\/regulation-of-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.shortform.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Regulation of AI: 4 Ways to Impose Limits on Superintelligence"}]},{"@type":"WebSite","@id":"https:\/\/www.shortform.com\/blog\/#website","url":"https:\/\/www.shortform.com\/blog\/","name":"Shortform Books","description":"The World&#039;s Best Book Summaries","publisher":{"@id":"https:\/\/www.shortform.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.shortform.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.shortform.com\/blog\/#organization","name":"Shortform 
Books","url":"https:\/\/www.shortform.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png","contentUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png","width":500,"height":74,"caption":"Shortform Books"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/c3e1b539e89423b544ede91ab2bff937","name":"Katie Doll","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/6239731a3fc739640b80be30f2b1727a055d3535d0ee4569e8282faa323e47fc?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/6239731a3fc739640b80be30f2b1727a055d3535d0ee4569e8282faa323e47fc?s=96&d=mm&r=g","caption":"Katie Doll"},"description":"Somehow, Katie was able to pull off her childhood dream of creating a career around books after graduating with a degree in English and a concentration in Creative Writing. Her preferred genre of books has changed drastically over the years, from fantasy\/dystopian young-adult to moving novels and non-fiction books on the human experience. 
Katie especially enjoys reading and writing about all things television, good and bad.","knowsAbout":["Bachelor of Arts in English With a Concentration in Creative Writing"],"jobTitle":"Senior SEO Writer","worksFor":"Shortform","url":"https:\/\/www.shortform.com\/blog\/author\/katie\/"}]}},"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/04\/AI-robot-technology-artificial-intelligence.jpg","_links":{"self":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/115876","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/comments?post=115876"}],"version-history":[{"count":4,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/115876\/revisions"}],"predecessor-version":[{"id":116101,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/115876\/revisions\/116101"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/media\/30927"}],"wp:attachment":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/media?parent=115876"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/categories?post=115876"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/tags?post=115876"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}