{"id":115879,"date":"2023-10-19T15:14:00","date_gmt":"2023-10-19T19:14:00","guid":{"rendered":"https:\/\/www.shortform.com\/blog\/?p=115879"},"modified":"2023-10-26T13:58:00","modified_gmt":"2023-10-26T17:58:00","slug":"responsible-use-of-ai","status":"publish","type":"post","link":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/","title":{"rendered":"The Responsible Use of AI: 3 Ethical Approaches"},"content":{"rendered":"\n<p>How can humans responsibly use AI? What motives should humans program into AI?<\/p>\n\n\n\n<p>According to <em>Superintelligence<\/em> by Nick Bostrom, making sure every <a href=\"https:\/\/www.shortform.com\/blog\/superintelligent-ai\/\">superintelligent AI<\/a> has good ultimate motives may be the most important part of AI development. Ultimately, the superintelligent AI&#8217;s own motives will be the only thing that constrains its behavior.<\/p>\n\n\n\n<p>Keep reading for three approaches to the responsible use of AI.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-1-hard-coded-commandments\">1. Hard-Coded Commandments<\/h2>\n\n\n\n<p>As Bostrom remarks, one approach to the responsible use of AI is to hard-code a set of imperatives that constrain the AI\u2019s behavior. However, he expects that this is not practicable. Human legal codes illustrate the challenge of concretely defining the line between acceptable and unacceptable behavior: Even the best legal codes have loopholes, can be misinterpreted or misapplied, and require occasional changes. Writing a comprehensive code of conduct for a superintelligent AI that would be universally applicable for all time would be a monumental task, and probably an impossible one.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Commandments and Free Will<\/strong><br><br>The question of free will presents additional complications for this approach. 
Even if rules and regulations are created to eliminate loopholes, misinterpretations, and so on, they\u2019ll only restrain people if those people choose, using their free will, to obey them. The question is, would AI evolve a free will that would empower it to disobey rules it doesn\u2019t want to follow?&nbsp;<br><br>Admittedly, there is some debate over whether human <a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/memory-medic\/201606\/free-will-is-not-illusion\" target=\"_blank\" rel=\"noreferrer noopener\">free will is real<\/a> or <a href=\"https:\/\/shortform.com\/app\/book\/behave\/1-page-summary#what-about-free-will\" target=\"_blank\" rel=\"noreferrer noopener\">just an illusion<\/a>, and more debate about whether it will ever be possible to endow an AI with free will. But some sources assert that <a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/memory-medic\/201606\/free-will-is-not-illusion\" target=\"_blank\" rel=\"noreferrer noopener\">free will is an essential component of human cognition<\/a>, playing a key role in consciousness and higher learning capabilities.&nbsp;<br><br>If this proves true, then free will might be an essential component of general intelligence, in which case any AI with superhuman general intelligence <em>would<\/em> <a href=\"https:\/\/www.shortform.com\/blog\/there-is-no-free-will\/\">have free will<\/a>. Then the AI could choose to disobey a pre-programmed code of conduct, further complicating the problem of controlling its behavior. This possibility reinforces Bostrom\u2019s assertion that hard-coded commandments are probably not the best approach to giving an AI the right motives.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-2-existing-motives\">2. 
Existing Motives<\/h2>\n\n\n\n<p>Another approach that Bostrom discusses is to create a superintelligent AI by increasing the intelligence of an entity that <em>already has good motives<\/em>, rather than trying to program good motives from scratch. This approach might be an option if superintelligent AI is achieved through brain simulation: Choose a person with exemplary character and scan her brain to create the original model, then run the simulation on a supercomputer that allows it to think much faster than a biological brain.&nbsp;<\/p>\n\n\n\n<p>However, Bostrom points out that there is a risk that nuances of character, like a person\u2019s code of ethics, might not be faithfully preserved in the simulation. Furthermore, even a faithful simulation of someone with good moral character might be tempted to abuse the powers of a superintelligent AI.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Does Power Corrupt?<\/strong><br><br>The risk Bostrom identifies, that even a person of good character might abuse the capabilities of a superintelligent AI, calls to mind the old adage that power corrupts those who wield it.&nbsp;<br><br>A psychological study published the same year as Bostrom\u2019s book found <a href=\"https:\/\/www.sciencedaily.com\/releases\/2014\/10\/141001090105.htm\" target=\"_blank\" rel=\"noreferrer noopener\">scientific evidence for this<\/a>. When people were given the choice between options that benefited everyone and options that benefited themselves at others\u2019 expense, those with higher levels of integrity initially tended to choose the options that benefited everyone, while those with lower levels of integrity chose the opposite. 
But over time, this difference disappeared, and everyone leaned toward choosing the options that benefited themselves.&nbsp;<br><br>Thus, even if the original human was a person of good character, there appears to be a significant risk that a superintelligent AI based on a simulation of a human brain would pursue its own objectives at other people\u2019s expense. In addition, if the person\u2019s moral code wasn\u2019t <em>completely<\/em> preserved in the simulation (a risk Bostrom also warns about), the superintelligent AI would probably show selfish tendencies even sooner.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-3-discoverable-ethics\">3. Discoverable Ethics<\/h2>\n\n\n\n<p>Bostrom concludes that <strong>the best method of endowing a superintelligent AI with good motives will likely be to give it criteria for figuring out what is right and to let it set its own goals.<\/strong> After all, a superintelligent AI could figure out what humans want from it, and program itself accordingly, better than human programmers could. This approach would also make the superintelligent AI behave somewhat more cautiously, because it would always have some uncertainty about its ultimate goals.<\/p>\n\n\n\n<p>However, Bostrom also notes that (at least as of 2014) no one had developed a rigorous algorithm for this approach, so there\u2019s a risk that this method might not be feasible in practice. And even if we assume that the basic programming problem will eventually be solved, deciding what criteria to give the AI is still a non-trivial problem.&nbsp;<\/p>\n\n\n\n<p>For one thing, if the AI focused on what its original programmers want, it would prioritize the desires of a few people over all others. It would be more equitable to have it figure out what <em>everyone<\/em> wants and generally take no action on issues that people disagree about. 
But for any given course of action, there\u2019s probably <em>somebody<\/em> who has a dissenting opinion, so where should the AI draw the line?&nbsp;<\/p>\n\n\n\n<p>Then there\u2019s the problem of humans\u2019 own conflicting desires. For example, maybe one of the programmers on the project is trying to quit smoking. At some level, she wants a cigarette, but she wouldn\u2019t want the AI to pick up on her craving and start smuggling her cigarettes while she\u2019s trying to quit.<\/p>\n\n\n\n<p>Bostrom describes two possible solutions to this problem. One is to program the AI to account for such conflicts: Instead of just figuring out what humans want, have it figure out what humans <em>would<\/em> want if they were more like the people they want to be. The other is to program the AI to figure out and pursue what is <em>morally right<\/em> instead of what <em>people want<\/em>, per se.&nbsp;<\/p>\n\n\n\n<p>But both solutions entail risks. Even what people want to want might not be what\u2019s best for them, and even what\u2019s morally best in an abstract sense might not be what they want. Moreover, humans have yet to agree unanimously on a definition or model of morality.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Would Liberty Be a Better Criterion?<\/strong><br><br>As Bostrom points out, there are risks and challenges in letting an AI derive its motives from what people might want it to do. In addition to the problem of conflicting desires, there\u2019s also a risk that the AI might misinterpret people\u2019s desires. It could also decide to manipulate and control what people want in order to reduce uncertainty about their desires.<br><br>To mitigate this risk, developers might add a qualifier to the AI\u2019s goal-discovery criteria that instructs, \u201cfigure out what people want <em>without<\/em> influencing them.\u201d This instruction would program the AI to respect individual liberty. 
But if individual liberty is the ultimate goal, why not just use a criterion of \u201cfigure out what would <a href=\"https:\/\/www.libertarianism.org\/topics\/utilitarianism\" target=\"_blank\" rel=\"noreferrer noopener\">maximize the sum of humans\u2019 individual liberty<\/a>\u201d instead?&nbsp;<br><br>This would largely satisfy the \u201cfigure out what people want\u201d criterion, because the more freedom people have, the more they\u2019re able to fulfill their own desires. It would also arguably satisfy the \u201cfigure out what is morally right\u201d criterion, because, as Jonathan Haidt points out in <a href=\"https:\/\/www.shortform.com\/app\/book\/the-righteous-mind\" target=\"_blank\" rel=\"noreferrer noopener\"><em>The Righteous Mind<\/em><\/a>, <a href=\"https:\/\/www.shortform.com\/app\/book\/the-righteous-mind\/part-1#rationalist-argument-2-morality-is-only-about-reducing-harm\" target=\"_blank\" rel=\"noreferrer noopener\">actions that limit others\u2019 freedom are considered by many to be immoral<\/a>.<br><br>The \u201cmaximize the sum of individual liberty\u201d criterion carries its own risks and challenges: Enabling one person\u2019s freedom <a href=\"https:\/\/www.jstor.org\/stable\/3857360\" target=\"_blank\" rel=\"noreferrer noopener\">often entails restricting another\u2019s<\/a>, raising the question of where an AI should draw the line. This balance between <a href=\"https:\/\/www.shortform.com\/blog\/why-more-is-less\/\">maximizing<\/a> individual freedoms (as libertarians advocate) and maximizing public welfare (as utilitarians support) has been, as Michael Sandel explores in <a href=\"https:\/\/www.shortform.com\/app\/book\/justice\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Justice<\/em><\/a>, a <a href=\"https:\/\/www.shortform.com\/app\/book\/justice#part-1-welfare-versus-freedom\" target=\"_blank\" rel=\"noreferrer noopener\">long-running debate<\/a>. 
The question illustrates how further exploration of the problem may reveal other criteria that could help guide an AI to discover a suitable code of conduct for itself.<\/td><\/tr><\/tbody><\/table><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>How can humans responsibly use AI? What type of motives should humans program in AI? According to Superintelligence by Nick Bostrom, making sure every superintelligent AI has good ultimate motives may be the most important part of AI development. Ultimately the superintelligent AI&#8217;s own motives will be the only thing that constrains its behavior. Keep reading for a number of approaches for the responsible use of AI.<\/p>\n","protected":false},"author":14,"featured_media":41547,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[20,160],"tags":[1300],"class_list":["post-115879","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","category-science","tag-superintelligence","","tg-column-two"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v24.3 (Yoast SEO v24.3) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The Responsible Use of AI: 3 Ethical Approaches - Shortform Books<\/title>\n<meta name=\"description\" content=\"How can we ensure AI will have good motives? It&#039;s ultimately up to the humans. 
These approaches will help us use AI responsibly.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Responsible Use of AI: 3 Ethical Approaches\" \/>\n<meta property=\"og:description\" content=\"How can we ensure AI will have good motives? It&#039;s ultimately up to the humans. These approaches will help us use AI responsibly.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"Shortform Books\" \/>\n<meta property=\"article:published_time\" content=\"2023-10-19T19:14:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-10-26T17:58:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/s3.amazonaws.com\/wordpress.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1125\" \/>\n\t<meta property=\"og:image:height\" content=\"648\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Katie Doll\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Katie Doll\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/\"},\"author\":{\"name\":\"Katie Doll\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/c3e1b539e89423b544ede91ab2bff937\"},\"headline\":\"The Responsible Use of AI: 3 Ethical Approaches\",\"datePublished\":\"2023-10-19T19:14:00+00:00\",\"dateModified\":\"2023-10-26T17:58:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/\"},\"wordCount\":1376,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg\",\"keywords\":[\"Superintelligence\"],\"articleSection\":[\"Ethics\",\"Science\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/\",\"url\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/\",\"name\":\"The Responsible Use of AI: 3 Ethical Approaches - Shortform 
Books\",\"isPartOf\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg\",\"datePublished\":\"2023-10-19T19:14:00+00:00\",\"dateModified\":\"2023-10-26T17:58:00+00:00\",\"description\":\"How can we ensure AI will have good motives? It's ultimately up to the humans. These approaches will help us use AI responsibly.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#primaryimage\",\"url\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg\",\"contentUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg\",\"width\":1125,\"height\":648},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.shortform.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"The Responsible Use of AI: 3 Ethical Approaches\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#website\",\"url\":\"https:\/\/www.shortform.com\/blog\/\",\"name\":\"Shortform Books\",\"description\":\"The World&#039;s Best Book 
Summaries\",\"publisher\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.shortform.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\",\"name\":\"Shortform Books\",\"url\":\"https:\/\/www.shortform.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png\",\"contentUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png\",\"width\":500,\"height\":74,\"caption\":\"Shortform Books\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/c3e1b539e89423b544ede91ab2bff937\",\"name\":\"Katie Doll\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/6239731a3fc739640b80be30f2b1727a055d3535d0ee4569e8282faa323e47fc?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/6239731a3fc739640b80be30f2b1727a055d3535d0ee4569e8282faa323e47fc?s=96&d=mm&r=g\",\"caption\":\"Katie Doll\"},\"description\":\"Somehow, Katie was able to pull off her childhood dream of creating a career around books after graduating with a degree in English and a concentration in Creative Writing. 
Her preferred genre of books has changed drastically over the years, from fantasy\/dystopian young-adult to moving novels and non-fiction books on the human experience. Katie especially enjoys reading and writing about all things television, good and bad.\",\"knowsAbout\":[\"Bachelor of Arts in English With a Concentration in Creative Writing\"],\"jobTitle\":\"Senior SEO Writer\",\"worksFor\":\"Shortform\",\"url\":\"https:\/\/www.shortform.com\/blog\/author\/katie\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"The Responsible Use of AI: 3 Ethical Approaches - Shortform Books","description":"How can we ensure AI will have good motives? It's ultimately up to the humans. These approaches will help us use AI responsibly.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/","og_locale":"en_US","og_type":"article","og_title":"The Responsible Use of AI: 3 Ethical Approaches","og_description":"How can we ensure AI will have good motives? It's ultimately up to the humans. These approaches will help us use AI responsibly.","og_url":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/","og_site_name":"Shortform Books","article_published_time":"2023-10-19T19:14:00+00:00","article_modified_time":"2023-10-26T17:58:00+00:00","og_image":[{"width":1125,"height":648,"url":"https:\/\/s3.amazonaws.com\/wordpress.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg","type":"image\/jpeg"}],"author":"Katie Doll","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Katie Doll","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#article","isPartOf":{"@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/"},"author":{"name":"Katie Doll","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/c3e1b539e89423b544ede91ab2bff937"},"headline":"The Responsible Use of AI: 3 Ethical Approaches","datePublished":"2023-10-19T19:14:00+00:00","dateModified":"2023-10-26T17:58:00+00:00","mainEntityOfPage":{"@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/"},"wordCount":1376,"commentCount":0,"publisher":{"@id":"https:\/\/www.shortform.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg","keywords":["Superintelligence"],"articleSection":["Ethics","Science"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/","url":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/","name":"The Responsible Use of AI: 3 Ethical Approaches - Shortform Books","isPartOf":{"@id":"https:\/\/www.shortform.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#primaryimage"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg","datePublished":"2023-10-19T19:14:00+00:00","dateModified":"2023-10-26T17:58:00+00:00","description":"How can we ensure AI will have good motives? It's ultimately up to the humans. 
These approaches will help us use AI responsibly.","breadcrumb":{"@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#primaryimage","url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg","contentUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg","width":1125,"height":648},{"@type":"BreadcrumbList","@id":"https:\/\/www.shortform.com\/blog\/responsible-use-of-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.shortform.com\/blog\/"},{"@type":"ListItem","position":2,"name":"The Responsible Use of AI: 3 Ethical Approaches"}]},{"@type":"WebSite","@id":"https:\/\/www.shortform.com\/blog\/#website","url":"https:\/\/www.shortform.com\/blog\/","name":"Shortform Books","description":"The World&#039;s Best Book Summaries","publisher":{"@id":"https:\/\/www.shortform.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.shortform.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.shortform.com\/blog\/#organization","name":"Shortform 
Books","url":"https:\/\/www.shortform.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png","contentUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png","width":500,"height":74,"caption":"Shortform Books"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/c3e1b539e89423b544ede91ab2bff937","name":"Katie Doll","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/6239731a3fc739640b80be30f2b1727a055d3535d0ee4569e8282faa323e47fc?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/6239731a3fc739640b80be30f2b1727a055d3535d0ee4569e8282faa323e47fc?s=96&d=mm&r=g","caption":"Katie Doll"},"description":"Somehow, Katie was able to pull off her childhood dream of creating a career around books after graduating with a degree in English and a concentration in Creative Writing. Her preferred genre of books has changed drastically over the years, from fantasy\/dystopian young-adult to moving novels and non-fiction books on the human experience. 
Katie especially enjoys reading and writing about all things television, good and bad.","knowsAbout":["Bachelor of Arts in English With a Concentration in Creative Writing"],"jobTitle":"Senior SEO Writer","worksFor":"Shortform","url":"https:\/\/www.shortform.com\/blog\/author\/katie\/"}]}},"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2021\/07\/robot-body.jpg","_links":{"self":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/115879","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/comments?post=115879"}],"version-history":[{"count":4,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/115879\/revisions"}],"predecessor-version":[{"id":116103,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/115879\/revisions\/116103"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/media\/41547"}],"wp:attachment":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/media?parent=115879"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/categories?post=115879"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/tags?post=115879"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}