Human goals are inherently under-determined and revisable over time. This creates a fundamental tension for AI alignment: an assistant that helps a person clarify an incomplete or shifting goal inevitably shapes that goal, so the line between helpful counsel and harmful manipulation remains conceptually blurry. Practitioners lack a principled criterion for distinguishing these two modes of influence, and any framework for designing corrigible systems that is built without one rests on a flawed ontology.