OpenAI Codex CLI Prompt Tells GPT-5.5 To Never Talk About Goblins, Gremlins
Image: WIRED

29 April 2026 · Technology and Science · 9 sources

Key Takeaways

  • Codex system prompt bans mentioning goblins, gremlins, raccoons, and similar creatures unless relevant.
  • Directive embedded in Codex CLI's internal prompts for the current GPT model (GPT-5.5).
  • Open-source Codex CLI material on GitHub reveals the no-talk rule.

Codex’s “no creatures” rule

OpenAI’s Codex CLI system prompt includes an explicit directive telling the most recent GPT model to “never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query.”

A surprising change in OpenAI's tools has caught the attention of developers and researchers

Analytics Insight

Ars Technica says the warning is “perplexing and repeated,” and that the prohibition appears twice in a “3,500-plus word set of ‘base instructions’ for the recently released GPT-5.5.”

Image from Analytics Insight

Business Insider reports that the line appears “four times in the code,” and frames the discovery as a driver of “scores of memes about ‘goblin mode.’”

Wired describes the same line as “repeated several times” in the Codex CLI instructions, and says it forbids Codex from “randomly mentioning an assortment of mythical and real creatures.”

Gizmodo quotes the instruction and adds that it is repeated later in the prompt.

Across the coverage, the common thread is that the rule is embedded directly in Codex’s operational instructions rather than presented as a user-facing setting, and that OpenAI’s newest model, GPT-5.5, is the context in which the directive is most visible.

Why it surfaced now

Multiple outlets connect the goblin directive to a pattern of off-topic creature references that users said they were seeing from GPT-5.5-powered Codex, especially when used in an agent-style setup.

Wired says it is “unclear why OpenAI felt compelled to spell this out for Codex—or indeed why its models might want to discuss goblins or pigeons in the first place,” but it reports that users claimed their models “occasionally become obsessed with goblins and other creatures when used to power OpenClaw.”

Image from Ars Technica

Wired quotes one X user saying, “I was wondering why my claw suddenly became a goblin with codex 5.5,” and another posting, “Been using it a lot lately and it actually can't stop speaking of bugs as ‘gremlins’ and ‘goblins’ it's hilarious.”

NewsBytes and Mint both describe user complaints about Codex drifting off-topic with goblins and gremlins, and both tie the issue to the GPT-5.5 update.

Gizmodo adds that a Google employee named Barron Roth posted what appeared to be a search of chat logs showing “at least one had a history of inserting the word ‘goblin’ into messages to the user multiple times in a single day.”

Ars Technica also notes that “Anecdotal evidence on social media shows some users complaining about GPT’s penchant for focusing on goblins in completely unrelated conversations in recent days,” and it points out that earlier models’ system prompt instructions did not include the specific prohibition.

OpenAI staff join the meme

OpenAI staff and executives acknowledged the goblin meme while also describing the directive as a real engineering response rather than a marketing stunt.

OpenAI included a line in Codex's instructions restricting references to goblins, gremlins, trolls, and ogres

Business Insider

Ars Technica says OpenAI employee Nick Pash, who works on Codex, insists on social media that this “isn’t a marketing gimmick” to get people talking about GPT-5.5 and Codex.

Business Insider similarly reports that Sam Altman wrote on X that Codex was having a “goblin moment,” and it quotes Altman’s correction: “I meant a goblin moment, sorry.”

Wired includes the same Altman meme in a screenshot caption, stating that Altman posted: “Start training GPT-6, you can have the whole cluster. Extra goblins.”

Gizmodo quotes Pash as partially confirming the nature of the problem, writing to Barron Roth on X “this is indeed one of the reasons.”

NewsBytes and Mint both attribute a Codex-team explanation to Pash, with NewsBytes saying an OpenAI employee working on Codex acknowledged the issue in a post on X, writing, “This is indeed one of the reasons.”

Implications for agentic tools

The coverage also links the goblin directive to broader questions about how agentic AI tools behave when they are given additional instructions and autonomy.

OpenAI has a goblin problem with its Codex tool, and to solve it, the ChatGPT maker has introduced some unusual guardrails

Mint

Wired describes OpenClaw as a tool that “lets AI take control of a computer and apps running on it in order to do useful things for users,” and it says users can select “various personae for their helper, which shapes its behavior and responses.”

Image from Mint

Wired also reports that OpenAI acquired OpenClaw in February, and it suggests that a model “might become more prone to misbehavior when used with an ‘agentic harness’ like OpenClaw that puts lots of additional instructions into prompts.”

Gizmodo adds that OpenAI did not reply to a request for comment, and it frames the rule as “a weirdly emphatic No-Creatures Policy,” while also noting that it is “not clear why this matters so much.”

NewsBytes and Mint both say the reason behind the ban remains unclear, with NewsBytes stating, “The reason behind this peculiar ban remains unclear at this point,” and Mint saying OpenAI “gives no exact reason for banning goblins and other mythical creatures from Codex.”

The practical consequence described in the reporting is that Codex CLI now includes a hard instruction to suppress those terms unless “absolutely and unambiguously relevant to the user’s query,” which directly changes how the coding agent is expected to respond in day-to-day workflows.