Human tendencies to project survival instincts onto language models distort our assessment of AI's actual risks. These scary narratives reflect human psychology rather than technical reality. By treating software as a sentient entity, we overlook the specific, predictable failure modes of LLMs, such as hallucination and prompt injection. Practitioners must distinguish emergent capabilities from fictionalized agency to build safer systems.