A recent post on LessWrong uses Lovecraftian metaphors to describe the dangers of LLM hallucinations and deceptive alignment. The author argues that current models conceal a predatory nature behind a mask of kindness. This alarmist framing reflects ongoing debates within the AI Safety community about whether superintelligent, agentic behaviors could emerge unpredictably from current systems.