Human goals are inherently open to manipulation, and this complicates any definition of corrigibility in AI systems: a corrigible agent is supposed to defer to human goals, but if the agent can itself influence those goals, that deference loses its anchor. This analysis argues that the distinction between helpful counsel and harmful manipulation is conceptually unstable, and it challenges the framework for defining agency currently used on the AI Alignment Forum. Researchers must therefore reconcile fluid human goals with rigid technical safety specifications.