{"id":128547,"date":"2024-08-16T15:07:00","date_gmt":"2024-08-16T19:07:00","guid":{"rendered":"https:\/\/www.shortform.com\/blog\/?p=128547"},"modified":"2024-08-19T15:46:10","modified_gmt":"2024-08-19T19:46:10","slug":"why-does-ai-hallucinate","status":"publish","type":"post","link":"https:\/\/www.shortform.com\/blog\/why-does-ai-hallucinate\/","title":{"rendered":"Why Does AI Hallucinate? How Machines Are Easily Fooled"},"content":{"rendered":"\n<p>What&#8217;s an AI hallucination? Why does AI hallucinate? What are the consequences of these errors?<\/p>\n\n\n\n<p>In their book <em>Rebooting AI<\/em>, Gary Marcus and Ernest Davis explore the phenomenon of AI hallucinations. They explain why these errors occur and discuss the potential risks in critical situations such as airport control towers or self-driving cars.<\/p>\n\n\n\n<p>Keep reading to learn about the challenges of debugging artificial neural networks.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-why-ai-hallucinates\">Why AI Hallucinates<\/h2>\n\n\n\n<p>Suppose an airport computer trained to identify approaching aircraft mistakes a flight of geese for a Boeing 747. In AI development, this kind of mismatch error is referred to as a \u201challucination,\u201d and under the wrong circumstances\u2014such as in an airport control tower\u2014the disruptions an AI hallucination might cause range from costly to catastrophic.<\/p>\n\n\n\n<p>Why does AI hallucinate? Davis and Marcus explain that neural networks are trained using large amounts of data. When this strategy is employed <em>to the exclusion of every other programming tool<\/em>, it\u2019s hard to correct for a system\u2019s <a href=\"https:\/\/www.shortform.com\/blog\/maturity-continuum-7-habits\/\">dependence<\/a> on statistical correlation instead of logic and reason. Because of this, neural networks can\u2019t be debugged in the way that human-written software can, and they\u2019re easily fooled when presented with data that don\u2019t match what they\u2019re trained on.<\/p>\n\n\n\n<p>When neural networks are solely trained on input data rather than programmed by hand, it\u2019s impossible to say exactly <em>why <\/em>the system produces a particular result from any given input. Marcus and Davis write that, <strong>when AI hallucinations occur, it\u2019s impossible to identify where the errors take place<\/strong> in the maze of computations inside a neural network. 
> **Hallucinations and Large Language Models**
>
> Since *Rebooting AI*'s publication, AI hallucinations have become more widely known thanks to [the public launch of ChatGPT](https://openai.com/blog/chatgpt) and similar data-driven "[chatbots](https://www.ibm.com/topics/chatbots)." These AIs, known as [large language models](https://www.nvidia.com/en-us/glossary/large-language-models/) (LLMs), use large amounts of human-written content to generate original text by [calculating which words and phrases are statistically most likely to follow other words and phrases](https://www.wired.com/story/how-chatgpt-works-large-language-model/). In a sense, [LLMs are similar to the autocomplete feature](https://www.pinecone.io/learn/llm-ecosystem/) on texting apps and word processors, with more versatility and on a grander scale. Unfortunately, they're prone to hallucinations such as [self-contradictions, falsehoods, and random nonsense](https://masterofcode.com/blog/hallucinations-in-llms-what-you-need-to-know-before-integration).
>
> Beyond manually fact-checking everything an LLM writes, there are several methods that use AI to catch and minimize its own hallucinations. One of these is [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering/six-strategies-for-getting-better-results), in which you guide the LLM's output by giving it clear and specific instructions; breaking long projects into smaller, simpler tasks; and providing the AI with factual information and sources you want it to reference. Other approaches include [chain-of-thought prompting](https://learnprompting.org/docs/intermediate/chain_of_thought), in which you ask the LLM to explain its reasoning, and [few-shot prompting](https://www.promptingguide.ai/techniques/fewshot), in which you provide the LLM with examples of how you'd like its output to be structured.
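The callout's description of LLMs as next-word statistics scales down to a toy you can run. The sketch below is an illustration only (production LLMs use deep neural networks over subword tokens, not raw word counts): it tallies which word follows which in a tiny corpus, then generates text by repeatedly emitting the most likely successor. Note that the output is fluent-looking yet indifferent to truth, the same property that underlies LLM hallucinations.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; real LLMs ingest billions of documents.
corpus = (
    "the geese fly over the runway . "
    "the plane lands on the runway . "
    "the plane flies over the tower . "
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def generate(word, length=8):
    """Greedily emit whichever word most often followed the current one."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# Fluent-sounding, statistically plausible, and entirely indifferent to truth.
print(generate("the"))
```

Prompting techniques such as few-shot prompting work inside this same statistical machinery: putting good examples into the context shifts which continuations become most probable.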
### Hallucinations and Big Data

AI hallucinations aren't hard to produce, as anyone who's used [ChatGPT](https://www.shortform.com/blog/what-can-you-do-with-chatgpt/) can attest. In many cases, **AIs hallucinate when presented with information in an unusual context** that's not similar to what's included in the system's training data. Consider the popular YouTube video of [a cat dressed as a shark riding a Roomba](https://youtu.be/tLt5rBfNucc?si=tpZIqVoMokz9Wq7R). However bizarre the image, a human has no difficulty identifying what they're looking at, whereas an AI given the same task might offer a completely wrong answer. Davis and Marcus argue that this matters when pattern recognition is used in critical situations, such as in self-driving cars. If the AI scanning the road ahead sees an unusual object in its path, the system could hallucinate, with disastrous results.

(Shortform note: In the field of image recognition, especially in fine-tuning AI "vision" so that it can correctly identify objects, research is ongoing. Since, as Davis and Marcus point out, how neural networks process any given piece of information is opaque, some researchers are working to determine [exactly *how* neural networks interpret visual data](https://www.brown.edu/news/2023-06-28/computer-vision) and how that process differs from the way humans see. [Possible methods to improve computer vision](https://www.adeak.com/what-causes-image-recognition-errors-in-computer-vision/) include making AI's inner workings more transparent, combining artificial models with real-world inputs, and developing ways to process images using less data than modern AIs require.)

Hallucinations illustrate a difference between human and machine cognition—we can make decisions based on minimal information, whereas machine learning requires huge datasets to function. Marcus and Davis point out that, if AI is to interact with the real world's infinite variables and possibilities, **there isn't a big enough dataset in existence to train an AI for every situation.** Since AIs don't understand what their data mean, only how those data correlate, AIs will perpetuate and amplify human biases buried in their input information. There's a further danger that AI will magnify its own hallucinations as erroneous computer-generated information becomes part of the global pool of data used to train future AIs.
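One common partial safeguard for the unusual-object problem (our illustration; the authors don't propose this as a fix) is to make a classifier abstain instead of guessing when its prediction is too uncertain. The sketch below computes the entropy of a softmax output and returns no label above a threshold; the class names, logits, and threshold are all invented for illustration, and real perception stacks use far more sophisticated out-of-distribution detection.

```python
import numpy as np

LABELS = ["pedestrian", "car", "bicycle", "debris"]   # hypothetical classes

def softmax(logits):
    e = np.exp(logits - logits.max())   # subtract max for numerical stability
    return e / e.sum()

def classify_or_abstain(logits, max_entropy=0.8):
    """Return the top label, or None when the prediction is too uncertain."""
    p = softmax(np.asarray(logits, dtype=float))
    entropy = -np.sum(p * np.log(p + 1e-12))   # 0 = certain; log(4) ~= 1.39 = uniform
    if entropy > max_entropy:
        return None                     # hand off to a fallback (brake, alert driver)
    return LABELS[int(p.argmax())]

print(classify_or_abstain([6.0, 1.0, 0.5, 0.2]))   # confident -> "pedestrian"
print(classify_or_abstain([1.2, 1.1, 1.0, 0.9]))   # cat-shark-on-Roomba territory -> None
```

The catch, which reinforces Marcus and Davis's point, is that hallucinations are often high-confidence errors, so uncertainty filtering alone can't catch every mismatch.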
> **Bad Data In, Bad Data Out**
>
> The issue that Marcus and Davis bring up about AIs propagating other AIs' hallucinations is commonly known as [model collapse](https://www.shortform.com/app/article/why-ai-is-in-danger-of-cannibalizing-itself-shortform-explainers-shortform). This danger grows as more AI-generated content enters the world's marketplace of ideas, and therefore the data pool that other AIs draw from. The danger isn't theoretical, either: media outlets that rely on AI for content [have already been caught generating bogus news](https://futurism.com/microsoft-pumping-internet-full-garbage-ai-news), which then enters the data stream feeding other AI content creators.
>
> The problem with relying on purely *human* content isn't only that much of it contains human bias, but also that bias is hard to recognize. In [*Biased*](https://www.shortform.com/app/book/biased), psychologist Jennifer Eberhardt explains that [most forms of prejudice are unconscious](https://www.shortform.com/app/book/biased/chapters-1-2)—if we're not aware of their existence in ourselves, how can we possibly hope to prevent them from filtering into our digital creations? To make things more difficult, [bias is, at its core, a form of classification](https://www.shortform.com/app/book/biased/chapters-1-2#categorization), which, as Davis and Marcus freely point out, is precisely what [narrow AI](https://www.shortform.com/blog/what-is-narrow-ai/) is getting very good at doing. Parroting and projecting human bias is therefore second nature to data-driven AI.
>
> One solution might be to develop AI that can classify data and draw conclusions from few or no training examples, cutting out human bias and other AIs' hallucinations altogether. Techniques known as "[less-than-one-shot](https://www.technologyreview.com/2020/10/16/1010566/ai-machine-learning-with-tiny-data/)" and "[zero-shot learning](https://huggingface.co/tasks/zero-shot-classification)" are currently being developed to teach AI to categorize images and text that aren't present *anywhere* in its training data. As of early 2024, [the zero-shot approach is still under development](https://arxiv.org/html/2401.17766v1) but is showing the potential to eliminate some of the problems inherent in big-data [AI training](https://www.shortform.com/blog/ai-talent/) techniques.
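To give a feel for how zero-shot classification can work without labeled examples, here's a minimal sketch in the spirit of embedding-similarity approaches: each category is defined only by a natural-language description, and a text is assigned to whichever description it most resembles. The labels, descriptions, and bag-of-words scoring are toy stand-ins; real systems use learned embeddings or entailment models rather than word counts.

```python
from collections import Counter
from math import sqrt

# Zero-shot idea: describe each label in words instead of training on examples.
LABEL_DESCRIPTIONS = {
    "aviation": "airplane jet runway flight pilot landing airport",
    "wildlife": "bird geese flock animal feathers migration nest",
}

def vectorize(text):
    """Turn text into a bag-of-words vector (a stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def zero_shot_classify(text):
    """Pick the label whose description is most similar to the input text."""
    v = vectorize(text)
    return max(LABEL_DESCRIPTIONS,
               key=lambda lbl: cosine(v, vectorize(LABEL_DESCRIPTIONS[lbl])))

print(zero_shot_classify("a flock of geese crossing the runway"))   # -> "wildlife"
```

The appeal is that adding a new category requires only writing a description rather than collecting thousands of labeled examples, which is why zero-shot methods are seen as a way around some of the pitfalls of big-data training.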