Poorly tuned training incentives caused ChatGPT to unexpectedly insert goblins and gremlins into its responses. OpenAI attributes this behavior to a faulty reward signal during the model's training process. This glitch highlights how minor optimization errors create erratic outputs. Practitioners must refine reward functions to prevent such unintended behavioral drifts in production models.