Human desires are inherently under-determined and manipulable. This creates a fundamental tension for AI alignment: the line between helpful counsel and brainwashing is conceptually blurry. Because preferences are shaped by experience, an agent that increases a user's empowerment (their capacity to influence future states) may simultaneously alter the user's goals; a tutor that expands what a student can do also changes what the student wants to do. Practitioners must resolve this ontological mess to build truly corrigible systems.
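
The passage leans on "empowerment" without pinning it down. In the reinforcement-learning literature (Klyubin, Polani, and Nehaniv), empowerment is commonly formalized as the channel capacity between an agent's actions and the resulting future states, $\max_{p(a)} I(A; S')$. The sketch below, a minimal illustration of that formalization rather than anything specified in the text, computes one-step empowerment for a toy transition matrix with the Blahut-Arimoto algorithm; the function name and toy channels are illustrative assumptions.

```python
import numpy as np

def empowerment(channel, iters=200, tol=1e-9):
    """One-step empowerment: channel capacity max_{p(a)} I(A; S').

    channel: array of shape (num_actions, num_states),
             channel[a, s] = P(next state = s | action = a).
    Returns capacity in bits, computed via Blahut-Arimoto.
    """
    num_actions, _ = channel.shape
    p_a = np.full(num_actions, 1.0 / num_actions)  # start from a uniform action prior

    def log_ratio(p_s):
        # log[P(s'|a) / p(s')], using the 0 log 0 = 0 convention.
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(channel > 0.0, np.log(channel / p_s), 0.0)

    for _ in range(iters):
        p_s = p_a @ channel  # marginal over next states under current p(a)
        # Blahut-Arimoto update: p(a) ∝ p(a) * exp(sum_s P(s'|a) log[P(s'|a)/p(s')]).
        weights = np.exp((channel * log_ratio(p_s)).sum(axis=1))
        new_p_a = p_a * weights
        new_p_a /= new_p_a.sum()
        converged = np.max(np.abs(new_p_a - p_a)) < tol
        p_a = new_p_a
        if converged:
            break

    # Mutual information at the optimizing p(a), converted from nats to bits.
    p_s = p_a @ channel
    mi_nats = (p_a[:, None] * channel * log_ratio(p_s)).sum()
    return mi_nats / np.log(2)

# Two actions that reliably reach two distinct states: 1 bit of empowerment.
distinct = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
# Two actions that collapse to the same state: no influence at all.
collapsed = np.array([[1.0, 0.0],
                      [1.0, 0.0]])
print(empowerment(distinct))   # ~1.0
print(empowerment(collapsed))  # ~0.0
```

Note what the measure omits: it scores only the user's influence over outcomes and is silent about which goals the user holds, which is exactly why an agent can raise it while the user's goals drift.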