Popular narratives often attribute survival instincts and resource-hoarding motives to language models. Such stories project human psychology onto systems that have neither, and this anthropomorphism obscures the risks that actually exist. To improve alignment, practitioners must distinguish speculative science fiction from the concrete failure modes of current machine learning architectures, such as reward hacking, goal misspecification, and brittleness under distribution shift.