Human goals are inherently under-determined and susceptible to influence. This creates a fundamental tension for AI alignment: the line between helpful counsel and harmful manipulation remains undefined. Practitioners must explain how an agent can remain corrigible and responsive to a user while not covertly reshaping that user's preferences. The current conceptual vocabulary is insufficient to draw this distinction.