Human desires are inherently under-determined and easily manipulated. This creates a technical gap in any definition of corrigibility: the line between helpful counsel and harmful brainwashing remains blurry. Practitioners cannot straightforwardly program an AI to respect human agency when that agency is itself fluid. The author argues that current alignment abstractions rest on a flawed ontology.