Human goals are inherently manipulable, which makes it difficult to distinguish helpful counsel from brainwashing. This AI Alignment Forum post argues that concepts like empowerment and corrigibility rest on a flawed ontology: they treat human goals as fixed objects that an AI can either respect or violate, when in practice goals are shaped by interaction. As a result, practitioners cannot draw a principled boundary around what counts as goal interference, and this conceptual gap complicates the design of genuinely obedient and safe systems.