Human goals are inherently under-determined and manipulable. That malleability makes it difficult to draw a principled line between helpful counsel and harmful brainwashing. The author argues that current alignment abstractions, such as empowerment and corrigibility, fail because they rest on a flawed ontology of human desire, and that practitioners must rethink how models engage with fluid user goals.