{"id":140114,"date":"2025-01-22T11:24:32","date_gmt":"2025-01-22T15:24:32","guid":{"rendered":"https:\/\/www.shortform.com\/blog\/?p=140114"},"modified":"2025-01-23T11:45:23","modified_gmt":"2025-01-23T15:45:23","slug":"why-do-llms-hallucinate","status":"publish","type":"post","link":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/","title":{"rendered":"Why Do LLMs Hallucinate? Exploring the Causes &#038; Mitigation Efforts"},"content":{"rendered":"\n<p>Why do LLMs hallucinate? What causes these sophisticated language models to tell us things that aren&#8217;t true?<\/p>\n\n\n\n<p>Large language models (LLMs) such as <a href=\"https:\/\/www.shortform.com\/blog\/what-can-you-do-with-chatgpt\/\">ChatGPT<\/a> have revolutionized how we interact with artificial intelligence, generating impressively human-like responses. Yet these models frequently produce false information\u2014known as hallucinations.<\/p>\n\n\n\n<p>Keep reading to discover the fascinating reasons behind these mistakes and learn what researchers are doing to create more reliable systems.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-why-llms-hallucinate\">Why LLMs Hallucinate<\/h2>\n\n\n\n<p>If you\u2019ve ever had a conversation with a chatbot such as ChatGPT, you\u2019ve seen that AI models can do an amazing job of sounding like a person. But they\u2019re also prone to mistakes\u2014often confidently stating wildly wrong information\u2014called hallucinations. Hallucinations are difficult to eliminate, and they might be with us for the foreseeable future. So, why do LLMs hallucinate? We&#8217;ll look at how LLMs work, what hallucinations are and what causes them, and the efforts to reduce these mistakes.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-how-llms-work\">How LLMs Work<\/h3>\n\n\n\n<p>AI researchers have been developing large language models (LLMs) for years. 
But <strong>LLMs have taken off thanks to the most recent models\u2019 impressive capabilities in generating human-like text<\/strong>. You can see this skill on display when you interact with a chatbot such as ChatGPT. In their training, <a href=\"https:\/\/www.scientificamerican.com\/article\/chatgpt-isnt-hallucinating-its-bullshitting\/\" target=\"_blank\" rel=\"noreferrer noopener\">they learn complex patterns and relationships<\/a> that are embedded in language. Their ability to recognize and use these patterns and relationships enables them to answer questions and participate in a conversation. When you ask an LLM a question, it can often produce text that not only sounds like a human could have written it but is also accurate and relevant.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-llm-hallucinations\">LLM Hallucinations<\/h3>\n\n\n\n<p>LLM hallucinations occur when an AI model generates <a href=\"https:\/\/www.nytimes.com\/2023\/11\/06\/technology\/chatbots-hallucination-rates.html\" target=\"_blank\" rel=\"noreferrer noopener\">information that sounds plausible<\/a> but proves to be <a href=\"https:\/\/www.nytimes.com\/2023\/12\/17\/insider\/ai-chatbots-humans-hallucinate.html\" target=\"_blank\" rel=\"noreferrer noopener\">factually incorrect or even nonsensical<\/a> once you dig deeper. They do this all the time\u2014so often that researchers can\u2019t even definitively quantify how frequently LLMs say something inaccurate.<\/p>\n\n\n\n<p>Experts say that <strong>hallucinations are inherent in the way AI is built. They come about because of the many constraints on what a model can do when you ask it to answer a question or <a href=\"https:\/\/www.shortform.com\/blog\/how-to-complete-a-task-successfully\/\">complete a task<\/a><\/strong>. While these models are \u201ctrained\u201d on vast amounts of data, they can\u2019t capture every nuance and detail of human knowledge or understand information as we do. 
Instead, they <a href=\"https:\/\/www.nytimes.com\/2023\/05\/01\/business\/ai-chatbots-hallucination.html\" target=\"_blank\" rel=\"noreferrer noopener\">rely on statistical patterns and approximations<\/a> to read and write text. As a result, LLMs often produce outputs that seem coherent and believable but <a href=\"https:\/\/www.nytimes.com\/2023\/03\/29\/technology\/ai-chatbots-hallucinations.html\" target=\"_blank\" rel=\"noreferrer noopener\">aren&#8217;t actually true<\/a>. In this article, we\u2019ll take a closer look at how LLMs work, why they hallucinate, and whether that\u2019s a problem researchers can solve.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-causes-of-hallucinations\">Causes of Hallucinations<\/h4>\n\n\n\n<p>Depending on whom you ask, <strong>experts blame LLMs\u2019 tendency to hallucinate on different aspects of how they\u2019re built: the token limits they work within, the data they\u2019re trained on, and the way they make inferences about the patterns in that data<\/strong>. There\u2019s a different argument for each explanation\u2014though ultimately, many experts agree that what makes LLMs so good at generating text is also what makes them so good at <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/07\/searchgpt-openai-error\/679248\/\" target=\"_blank\" rel=\"noreferrer noopener\">generating <em>inaccurate<\/em> text<\/a>.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-cause-1-bad-data\">Cause #1: Bad Data<\/h5>\n\n\n\n<p>The quality and diversity of the data a model is trained on have a major impact. If the training data contains biases, inconsistencies, or factual errors (such as the claim that Berlin is the capital of France), then the model can learn these flaws and perpetuate them. 
Then, it can produce hallucinated content.<\/p>\n\n\n\n<p>For example, if a model\u2019s training data includes inaccurate information about a historical event, such as the first moonwalk\u2014perhaps the wrong date or an incorrect explanation of who was involved\u2014the LLM can incorporate these inaccuracies into its generated text. It can do this even when it&#8217;s seen the right fact but has also seen other data that <a href=\"https:\/\/amistrongeryet.substack.com\/p\/large-language-models-explained\" target=\"_blank\" rel=\"noreferrer noopener\">causes it to mix things up<\/a> and make incorrect associations. It presents these inaccuracies as facts. That\u2019s because <strong>the model doesn\u2019t know that these \u201cfacts\u201d are wrong, and it doesn\u2019t know that it\u2019s saying something that isn\u2019t true<\/strong>. If asked about the first person to walk on the moon, a model might respond, \u201cMichael Jackson was the first person to walk on the moon in 1969,\u201d incorrectly conflating Michael Jackson\u2019s signature dance move with Neil Armstrong\u2019s historic lunar steps.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>The Challenges and Limitations of Neural Networks<\/strong><br><br>In <em>Rebooting AI<\/em>, Gary Marcus and Ernest Davis explain that AI hallucinations occur when neural networks produce incorrect or misleading results from input data. A notable example is when an airport computer mistakes geese for a Boeing 747, illustrating how these errors can range from costly to catastrophic in critical situations. One significant challenge with AI hallucinations is the inability to pinpoint exact error locations within neural networks, making traditional debugging impossible. 
Instead, engineers must &#8220;retrain&#8221; systems with corrective data, though this doesn&#8217;t address the root causes of hallucinations.<br><br>AI hallucinations commonly occur when systems encounter unusual situations not present in their training data, as demonstrated by AI&#8217;s difficulty in identifying unusual scenarios such as a cat dressed as a shark riding a Roomba. Marcus and Davis write that this becomes particularly concerning in critical applications such as self-driving cars, where misidentification of objects could have serious consequences. The issue underscores a fundamental difference between human and machine cognition: while humans can make decisions with minimal information, AI requires massive datasets to function effectively.<br><br>Marcus and Davis also highlight that no dataset is large enough to cover all real-world possibilities, and AI systems can perpetuate and amplify biases present in their training data. Research continues in this field, particularly in improving AI vision and recognition. Scientists are working to make AI processing more transparent and develop more efficient data processing methods, though these challenges remain significant hurdles in AI development.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h5 class=\"wp-block-heading\" id=\"h-cause-2-token-limits\">Cause #2: Token Limits<\/h5>\n\n\n\n<p>LLMs operate within <a href=\"https:\/\/medium.com\/@marketing_novita.ai\/all-you-need-to-know-about-the-limitations-of-large-language-models-568e15f66809\" target=\"_blank\" rel=\"noreferrer noopener\">computational constraints<\/a>, which can lead to hallucinations. A key limitation is the number of &#8220;tokens&#8221;\u2014data units that can be words, characters, or word fragments\u2014that models can process at once. 
Because data processing is resource-intensive, LLMs must work within the hardware&#8217;s memory, energy, and time limitations.<\/p>\n\n\n\n<p>Some researchers argue that hallucinations are unavoidable due to these token limits. When answering a question such as &#8220;What is the capital of France?&#8221; the LLM generates its response by <a href=\"https:\/\/techcrunch.com\/2023\/09\/04\/are-language-models-doomed-to-always-hallucinate\/\" target=\"_blank\" rel=\"noreferrer noopener\">predicting the most likely next token based on previous tokens<\/a>. To work within their token limits, models often generate responses <a href=\"https:\/\/vectara.com\/blog\/reducing-hallucinations-in-llms\/\" target=\"_blank\" rel=\"noreferrer noopener\">one token at a time<\/a> rather than planning ahead\u2014essentially making up answers as they go along.<\/p>\n\n\n\n<p>This approach often enables LLMs to generate text that\u2019s accurate and makes sense. But that isn\u2019t always the case. <strong>Depending on how well the model can find relevant facts and summarize them, it may produce the wrong answer instead of the right one<\/strong>. For instance, an LLM might generate the answer, &#8220;The capital of France is Berlin.\u201d That statement <em>sounds<\/em> plausible if you\u2019re just looking at the sentence structure. But it\u2019s factually incorrect. 
The model arrives at an answer like this because its token-by-token predictions have gone astray: it chooses a response that sounds right but is wrong.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-the-current-inevitability-of-hallucinations\">The Current Inevitability of Hallucinations<\/h4>\n\n\n\n<p>While tools such as ChatGPT are known to produce some questionable text\u2014which can be a problem if you\u2019re depending on a chatbot to do your homework or draft that report your boss asked for\u2014it\u2019s an open question whether future versions of the models that underlie these chatbots are going to be as prone to hallucination as current versions. Here\u2019s the bad news: <strong>Researchers contend that hallucination is <\/strong><a href=\"https:\/\/arxiv.org\/abs\/2401.11817\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>inevitable with LLMs<\/strong><\/a>. They suggest that, despite sophisticated efforts to improve the accuracy and reliability of LLMs, some degree of hallucination is going to persist in models that are built and trained this way.<\/p>\n\n\n\n<p>That said, <strong>the extent and impact of hallucination can vary significantly, depending on what model you\u2019re using and what you\u2019re using it to do<\/strong>. Recent studies have focused on <a href=\"https:\/\/arxiv.org\/abs\/2311.05232\" target=\"_blank\" rel=\"noreferrer noopener\">classifying and quantifying<\/a> the rate of hallucination for many models and <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\" target=\"_blank\" rel=\"noreferrer noopener\">figuring out the best ways to detect<\/a> hallucinations. The goal is for researchers to better understand the nature and prevalence of hallucinations. With that knowledge, they can work toward reducing their effects. 
That might lead to LLMs that are less likely to hallucinate.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-strategies-to-reduce-hallucinations\">Strategies to Reduce Hallucinations<\/h4>\n\n\n\n<p><strong>Researchers have already identified ways to make some models less prone to hallucination<\/strong>. One approach is to try to better <a href=\"https:\/\/www.nytimes.com\/2024\/09\/23\/technology\/ai-chatbots-chatgpt-math.html\" target=\"_blank\" rel=\"noreferrer noopener\">ground LLMs in factual information<\/a> by giving them the ability to check their own answers. If they can use math to determine whether an answer is accurate, they can correct their own errors and avoid hallucinations. (However, math can\u2019t make up for LLMs\u2019 lack of real-world experience, a weakness that\u2019s on full display when models produce an answer such as, \u201cThe human body contains exactly 206 toes, which are crucial for maintaining balance while walking on our hands.\u201d)<\/p>\n\n\n\n<p>Another promising approach to reducing hallucinations is to provide models with <a href=\"https:\/\/stackoverflow.blog\/2023\/10\/18\/retrieval-augmented-generation-keeping-llms-relevant-and-current\/\" target=\"_blank\" rel=\"noreferrer noopener\">external knowledge sources<\/a> and <a href=\"https:\/\/www.wired.com\/story\/reduce-ai-hallucinations-with-rag\/\" target=\"_blank\" rel=\"noreferrer noopener\">fact-checking mechanisms<\/a> that use information outside of the model and its training data. These tools can help models and their users identify and correct hallucinations in real time. A model might first produce the hallucination, \u201cThe Great Wall of China is visible from the moon.\u201d You could prompt it to fact-check itself and get the correction, \u201cActually, that&#8217;s a myth. The wall isn&#8217;t visible to the naked eye from lunar orbit. 
The Great Wall of China is impressive, but not that impressive.\u201d\u00a0<\/p>\n\n\n\n<p><strong>These strategies may not wholly eliminate hallucinations<\/strong>. But they could significantly reduce the occurrence (and impact) of the kinds of inaccuracies that users currently see with these models. In the meantime, it\u2019s essential to remain aware of the potential for hallucinations and approach the information generated by LLMs with a critical eye\u2014especially when your boss or your professor is watching.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Why do LLMs hallucinate? What causes these sophisticated language models to tell us things that aren&#8217;t true? Large language models (LLMs) such as ChatGPT have revolutionized how we interact with artificial intelligence, generating impressively human-like responses. Yet these models frequently produce false information\u2014known as hallucinations. Keep reading to discover the fascinating reasons behind these mistakes and discover what researchers are doing to create more reliable systems.<\/p>\n","protected":false},"author":9,"featured_media":140125,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[160],"tags":[727],"class_list":["post-140114","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-science","tag-articles","","tg-column-two"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v24.3 (Yoast SEO v24.3) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Why Do LLMs Hallucinate? Exploring the Causes &amp; Mitigation Efforts - Shortform Books<\/title>\n<meta name=\"description\" content=\"Experts say that LLMs hallucinate because of the way they&#039;re built. 
Discover why these mistakes happen and how they might be reduced.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Why Do LLMs Hallucinate? Exploring the Causes &amp; Mitigation Efforts\" \/>\n<meta property=\"og:description\" content=\"Experts say that LLMs hallucinate because of the way they&#039;re built. Discover why these mistakes happen and how they might be reduced.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/\" \/>\n<meta property=\"og:site_name\" content=\"Shortform Books\" \/>\n<meta property=\"article:published_time\" content=\"2025-01-22T15:24:32+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-01-23T15:45:23+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/s3.amazonaws.com\/wordpress.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1344\" \/>\n\t<meta property=\"og:image:height\" content=\"768\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Elizabeth Whitworth\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Elizabeth Whitworth\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/\"},\"author\":{\"name\":\"Elizabeth Whitworth\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/d2928cf6c11a69ced1491d6a5b74fb13\"},\"headline\":\"Why Do LLMs Hallucinate? Exploring the Causes &#038; Mitigation Efforts\",\"datePublished\":\"2025-01-22T15:24:32+00:00\",\"dateModified\":\"2025-01-23T15:45:23+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/\"},\"wordCount\":1656,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg\",\"keywords\":[\"Articles\"],\"articleSection\":[\"Science\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/\",\"url\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/\",\"name\":\"Why Do LLMs Hallucinate? 
Exploring the Causes & Mitigation Efforts - Shortform Books\",\"isPartOf\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg\",\"datePublished\":\"2025-01-22T15:24:32+00:00\",\"dateModified\":\"2025-01-23T15:45:23+00:00\",\"description\":\"Experts say that LLMs hallucinate because of the way they're built. Discover why these mistakes happen and how they might be reduced.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#primaryimage\",\"url\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg\",\"contentUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg\",\"width\":1344,\"height\":768,\"caption\":\"A man with a confused look on his face looking at a computer screen illustrates the question, \\\"Why do LLMs hallucinate?\\\"\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.shortform.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Why Do LLMs Hallucinate? 
Exploring the Causes &#038; Mitigation Efforts\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#website\",\"url\":\"https:\/\/www.shortform.com\/blog\/\",\"name\":\"Shortform Books\",\"description\":\"The World&#039;s Best Book Summaries\",\"publisher\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.shortform.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#organization\",\"name\":\"Shortform Books\",\"url\":\"https:\/\/www.shortform.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png\",\"contentUrl\":\"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png\",\"width\":500,\"height\":74,\"caption\":\"Shortform Books\"},\"image\":{\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/d2928cf6c11a69ced1491d6a5b74fb13\",\"name\":\"Elizabeth Whitworth\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/1fff9d65a52ac4340660218e7b63ee5e365cf08e7aa7adff79a0142cd4b96f84?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/1fff9d65a52ac4340660218e7b63ee5e365cf08e7aa7adff79a0142cd4b96f84?s=96&d=mm&r=g\",\"caption\":\"Elizabeth Whitworth\"},\"description\":\"Elizabeth has a lifelong love of books. 
She devours nonfiction, especially in the areas of history, theology, and philosophy. A switch to audiobooks has kindled her enjoyment of well-narrated fiction, particularly Victorian and early 20th-century works. She appreciates idea-driven books\u2014and a classic murder mystery now and then. Elizabeth has a Substack and is writing a book about what the Bible says about death and hell.\",\"sameAs\":[\"rina@shortform.com\"],\"award\":[\"Contributions to joint task force efforts (FBI)\",\"Contributions to Special Operations Division (DOJ & DEA)\",\"Efforts to fight the war on drugs (NSA)\",\"Contributions to Operation Storm Front (US Customs Service)\"],\"knowsAbout\":[\"History\",\"Theology\",\"Government\"],\"jobTitle\":\"Senior SEO Writer\",\"worksFor\":\"Shortform\",\"url\":\"https:\/\/www.shortform.com\/blog\/author\/elizabeth\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Why Do LLMs Hallucinate? Exploring the Causes & Mitigation Efforts - Shortform Books","description":"Experts say that LLMs hallucinate because of the way they're built. Discover why these mistakes happen and how they might be reduced.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/","og_locale":"en_US","og_type":"article","og_title":"Why Do LLMs Hallucinate? Exploring the Causes & Mitigation Efforts","og_description":"Experts say that LLMs hallucinate because of the way they're built. 
Discover why these mistakes happen and how they might be reduced.","og_url":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/","og_site_name":"Shortform Books","article_published_time":"2025-01-22T15:24:32+00:00","article_modified_time":"2025-01-23T15:45:23+00:00","og_image":[{"width":1344,"height":768,"url":"https:\/\/s3.amazonaws.com\/wordpress.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg","type":"image\/jpeg"}],"author":"Elizabeth Whitworth","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Elizabeth Whitworth","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#article","isPartOf":{"@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/"},"author":{"name":"Elizabeth Whitworth","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/d2928cf6c11a69ced1491d6a5b74fb13"},"headline":"Why Do LLMs Hallucinate? Exploring the Causes &#038; Mitigation Efforts","datePublished":"2025-01-22T15:24:32+00:00","dateModified":"2025-01-23T15:45:23+00:00","mainEntityOfPage":{"@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/"},"wordCount":1656,"commentCount":0,"publisher":{"@id":"https:\/\/www.shortform.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#primaryimage"},"thumbnailUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg","keywords":["Articles"],"articleSection":["Science"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/","url":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/","name":"Why Do LLMs Hallucinate? 
Exploring the Causes & Mitigation Efforts - Shortform Books","isPartOf":{"@id":"https:\/\/www.shortform.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#primaryimage"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#primaryimage"},"thumbnailUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg","datePublished":"2025-01-22T15:24:32+00:00","dateModified":"2025-01-23T15:45:23+00:00","description":"Experts say that LLMs hallucinate because of the way they're built. Discover why these mistakes happen and how they might be reduced.","breadcrumb":{"@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#primaryimage","url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg","contentUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg","width":1344,"height":768,"caption":"A man with a confused look on his face looking at a computer screen illustrates the question, \"Why do LLMs hallucinate?\""},{"@type":"BreadcrumbList","@id":"https:\/\/www.shortform.com\/blog\/why-do-llms-hallucinate\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.shortform.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Why Do LLMs Hallucinate? 
Exploring the Causes &#038; Mitigation Efforts"}]},{"@type":"WebSite","@id":"https:\/\/www.shortform.com\/blog\/#website","url":"https:\/\/www.shortform.com\/blog\/","name":"Shortform Books","description":"The World&#039;s Best Book Summaries","publisher":{"@id":"https:\/\/www.shortform.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.shortform.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.shortform.com\/blog\/#organization","name":"Shortform Books","url":"https:\/\/www.shortform.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png","contentUrl":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2019\/06\/logo-equilateral-with-text-no-bg.png","width":500,"height":74,"caption":"Shortform Books"},"image":{"@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/d2928cf6c11a69ced1491d6a5b74fb13","name":"Elizabeth Whitworth","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.shortform.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/1fff9d65a52ac4340660218e7b63ee5e365cf08e7aa7adff79a0142cd4b96f84?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/1fff9d65a52ac4340660218e7b63ee5e365cf08e7aa7adff79a0142cd4b96f84?s=96&d=mm&r=g","caption":"Elizabeth Whitworth"},"description":"Elizabeth has a lifelong love of books. She devours nonfiction, especially in the areas of history, theology, and philosophy. 
A switch to audiobooks has kindled her enjoyment of well-narrated fiction, particularly Victorian and early 20th-century works. She appreciates idea-driven books\u2014and a classic murder mystery now and then. Elizabeth has a Substack and is writing a book about what the Bible says about death and hell.","sameAs":["rina@shortform.com"],"award":["Contributions to joint task force efforts (FBI)","Contributions to Special Operations Division (DOJ & DEA)","Efforts to fight the war on drugs (NSA)","Contributions to Operation Storm Front (US Customs Service)"],"knowsAbout":["History","Theology","Government"],"jobTitle":"Senior SEO Writer","worksFor":"Shortform","url":"https:\/\/www.shortform.com\/blog\/author\/elizabeth\/"}]}},"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"https:\/\/www.shortform.com\/blog\/wp-content\/uploads\/2025\/01\/confused-man-looking-at-computer-screens.jpg","_links":{"self":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/140114","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/comments?post=140114"}],"version-history":[{"count":10,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/140114\/revisions"}],"predecessor-version":[{"id":140126,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/posts\/140114\/revisions\/140126"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/media\/140125"}],"wp:attachment":[{"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/media?parent=140114"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/cate
gories?post=140114"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.shortform.com\/blog\/wp-json\/wp\/v2\/tags?post=140114"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}