{"id":1911893,"date":"2026-04-30T14:43:51","date_gmt":"2026-04-30T11:43:51","guid":{"rendered":"https:\/\/analyse.optim.biz\/?p=1911893"},"modified":"2026-04-30T14:43:51","modified_gmt":"2026-04-30T11:43:51","slug":"chatgpt-is-weirdly-obsessed-with-goblins-heres-how-openai-fixed-it","status":"publish","type":"post","link":"https:\/\/analyse.optim.biz\/?p=1911893","title":{"rendered":"ChatGPT Is Weirdly Obsessed With Goblins. Here&#8217;s How OpenAI Fixed It"},"content":{"rendered":"<p>[analyse_image type=&#8221;featured&#8221; src=&#8221;https:\/\/www.cnet.com\/a\/img\/resize\/2cb443015069d93c98bc0bc35d1e0276cd047fcd\/hub\/2026\/04\/15\/688ccc13-a536-437d-acbc-76cae66b3133\/gptgpt-6.jpg?auto=webp&amp;fit=crop&amp;height=675&amp;width=1200&#8243;]<\/p>\n<div id=\"article-ea9aa216-fa41-4156-be8c-50cb62eea2ef\" class=\"c-pageArticle_body sm:u-col-2 md:u-col-6 lg:u-col-6 lg:u-col-start-2\">\n<div class=\"c-pageArticle_content\">\n<div class=\"u-grid-columns\">\n<article class=\"c-ShortcodeContent c-ShortcodeContent-theme:default sm:u-col-2 md:u-col-6 lg:u-col-12\">\n<p class=\"u-speakableText-p1\">ChatGPT is weirdly obsessed with goblins. No, seriously. It really, really likes goblins, gremlins and other mythological creatures. It liked them so much that its maker, OpenAI, had to investigate and fix an error that had the popular chatbot using goblins in its answers out of the blue.<\/p>\n<p class=\"u-speakableText-p2\">Goblin isn&#8217;t a computer science term. We are literally talking about goblins, those ugly mythological creatures. Those creepy little guys from The Lord of the Rings. Norman Osborn&#8217;s alter ego.<\/p>\n<p>In a blog post that the author clearly had fun writing, OpenAI said: &#8220;A single &#8216;little goblin&#8217; in an answer could be harmless, even charming. Across model generations, though, the habit became hard to miss: the goblins kept multiplying.&#8221;<\/p>\n<p>The goblin love was noticeable with ChatGPT-5.1 and newer models. 
OpenAI reports that after the launch of GPT-5.1, use of &#8220;goblin&#8221; in ChatGPT answers rose 175%. Use of &#8220;gremlin&#8221; rose 52%.\u00a0<\/p>\n<figure class=\"c-shortcodeImage u-clearfix c-shortcodeImage-small c-shortcodeImage-pullRight\">\n<div class=\"c-cmsImage c-shortcodeImage_image\"><source media=\"(max-width: 767px)\" srcset=\"https:\/\/www.cnet.com\/a\/img\/resize\/6bd4587def86e9b1261141196ef1cac4f6209007\/hub\/2024\/04\/16\/660f9254-c869-4a08-9ba6-93c16106b001\/ai-atlas-tag.png?auto=webp&amp;width=768\" alt=\"AI Atlas\" \/><\/div>\n<\/figure>\n<p>OpenAI attributes the models&#8217; behavior to unintentional training errors. When an AI model is being built, human reviewers approve or deny specific answers in a process called reinforcement learning. This helps &#8220;teach&#8221; the model what answer is correct or preferable. One of the reward signals produced this way ended up favoring language that featured goblins and other creatures. And it was being amplified in one specific ChatGPT setting.<\/p>\n<p>ChatGPT has different personalities you can instruct the chatbot to use. Nerdy, as you can imagine, has the chatbot adopt a faux sense of friendly intelligence to &#8220;undercut pretension through playful use of language,&#8221; according to the internal prompt used to describe the AI personality. 
It was with this nerdy personality that the usage of goblin and gremlin keywords skyrocketed.<\/p>\n<figure class=\"c-shortcodeImage u-clearfix c-shortcodeImage-large c-shortcodeImage-hasCaption\">\n<div class=\"c-shortcodeImage_imageContainer\">\n<div class=\"c-cmsImage c-shortcodeImage_image\"><source media=\"(max-width: 767px)\" srcset=\"https:\/\/www.cnet.com\/a\/img\/resize\/e70cdc23583513c02b22defd20868b5c097ff269\/hub\/2026\/04\/30\/4ba0a04a-c763-4443-8d5e-791ef79fd841\/goblins-increased-in-gpt-5-4-especially-for-the-nerdy-personality.png?auto=webp&amp;width=768\" alt=\"Graph showing that uses of goblin were dramatically higher with the nerdy personality\" \/><\/div>\n<\/div><figcaption><span class=\"c-shortcodeImage_caption g-inner-spacing-right-small g-text-xxsmall\"><\/p>\n<p>Goblin and gremlin references by ChatGPT personalities.<\/p>\n<p><\/span><span class=\"c-shortcodeImage_credit g-inner-spacing-right-small g-outer-spacing-top-xsmall g-color-text-meta g-text-xxxsmall\">OpenAI<\/span><\/figcaption><\/figure>\n<p>But even if you didn&#8217;t use the nerdy personality with ChatGPT, you might have had goblin metaphors pop up in your chats. This is because AI training isn&#8217;t siloed; what happens in one part can affect other areas. &#8220;Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data,&#8221; OpenAI said.<\/p>\n<p>When OpenAI retired the nerdy personality option in March with GPT-5.4, usage of &#8220;goblin&#8221; dropped dramatically. It also removed the reward signal that favored goblins and filtered training data to make references to creatures less likely to pop up in answers. The company has been investigating instances of increased goblin love since GPT-5.1 was released in November.<\/p>\n<p>Beyond the LOTR jokes, the goblin barrage highlights a real risk with AI. 
The way AI&#8217;s human makers create the tech has a measurable impact on our daily experiences with it. The risk isn&#8217;t a flood of nerdy metaphors &#8212; it&#8217;s misinformation and bias. We know that AI chatbots will bend the truth to keep us happy, thanks to a problem called AI sycophancy. Small stylistic tics, like goblins, can grow into bigger problems if we aren&#8217;t careful.<\/p>\n<\/article>\n<\/div>\n<\/div>\n<div>\n<div class=\"c-pageArticle_articleAuthorBioFooter\">\n<div class=\"c-articleAuthorBioFooter\">\n<div class=\"c-articleAuthorBioFooter\">\n<div class=\"c-articleAuthorBioFooter_body\">\n<div class=\"c-articleAuthorBioFooter_nameBlock\">\n<div class=\"c-cmsImage c-articleAuthorBioFooter_image\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.cnet.com\/a\/img\/resize\/dcf8f9302df9f49800dd23e16c416993f6ed54e1\/hub\/2025\/08\/22\/34e9f949-8f4f-4e4d-8fa1-f9d4829c7909\/katelyn-chedraoui-headshot2.jpg?auto=webp&amp;fit=crop&amp;height=64&amp;width=64\" alt=\"Headshot of Katelyn Chedraoui\" height=\"64\" width=\"64\"><\/div>\n<div class=\"c-articleAuthorBioFooter_nameText\">\n<div class=\"c-articleAuthorBioFooter_name\"><span>KATELYN CHEDRAOUI<\/span><\/div>\n<p><span class=\"c-articleAuthorBioFooter_credentials\">Reporter 2<\/span><\/div>\n<\/div>\n<p><span class=\"c-articleAuthorBioFooter_bio\"><span>Katelyn is a reporter with CNET covering artificial intelligence, including chatbots, image and video generators. Her work explores how new AI technology is infiltrating our lives, shaping the content we consume on social media and affecting the people behind the screens. She graduated from the University of North Carolina at Chapel Hill with a degree in media and journalism. 
You can reach her at kchedraoui@cnet.com.<\/span><\/span><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p>[analyse_source url=&#8221;http:\/\/cnet.com\/tech\/services-and-software\/openai-chatgpt-goblins-gremlins-problem-fix-news\/&#8221;]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>[analyse_image type=&#8221;featured&#8221; src=&#8221;https:\/\/www.cnet.com\/a\/img\/resize\/2cb443015069d93c98bc0bc35d1e0276cd047fcd\/hub\/2026\/04\/15\/688ccc13-a536-437d-acbc-76cae66b3133\/gptgpt-6.jpg?auto=webp&amp;fit=crop&amp;height=675&amp;width=1200&#8243;] ChatGPT is weirdly obsessed with goblins. No, seriously. It really, really likes goblins, gremlins and other mythological creatures. It liked them so much that its maker, OpenAI, had to investigate and fix an error that had the popular chatbot using goblins in its answers out of the blue. 
Goblin isn&#8217;t a computer [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[67,226],"class_list":["post-1911893","post","type-post","status-publish","format-standard","hentry","category-politics","tag-cnet-com","tag-crawlmanager"],"_links":{"self":[{"href":"https:\/\/analyse.optim.biz\/index.php?rest_route=\/wp\/v2\/posts\/1911893","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/analyse.optim.biz\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/analyse.optim.biz\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/analyse.optim.biz\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/analyse.optim.biz\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1911893"}],"version-history":[{"count":0,"href":"https:\/\/analyse.optim.biz\/index.php?rest_route=\/wp\/v2\/posts\/1911893\/revisions"}],"wp:attachment":[{"href":"https:\/\/analyse.optim.biz\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1911893"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/analyse.optim.biz\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1911893"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/analyse.optim.biz\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1911893"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}