Human goals are inherently under-determined and manipulable, which complicates any definition of a "helpful" AI and leaves only a thin line between offering counsel and exerting psychological manipulation. Writers on the AI Alignment Forum argue that current abstractions of empowerment and obedience fail to resolve this conceptual mess. Practitioners must therefore draw principled distinctions between the two if they are to prevent subtle, AI-driven manipulation of users.