Human desires are fundamentally under-determined and manipulable. For AI alignment, this makes it technically difficult to draw the line between helpful counsel and harmful brainwashing. The author argues that concepts like corrigibility are flawed abstractions, and that practitioners must resolve this ontological mess to prevent models from subtly altering user goals during interaction.