Human goals are inherently malleable, making it difficult to distinguish helpful counsel from harmful brainwashing. This instability undermines common alignment targets such as empowerment and obedience. AI Alignment Forum contributors argue that current abstractions fail to account for this ontological mess. Practitioners must draw a principled distinction between legitimate goal-shifting and manipulation to ensure safety.