Rational humans typically align their actions with shared practices and norms rather than with fixed final goals. This essay argues that AI alignment research should shift from goal-directed optimization toward virtue-ethical agency. It challenges the Orthogonality Thesis by arguing that fully rational agents need not pursue fixed final objectives at all. Practitioners must therefore reconsider how they define agentic behavior if they are to avoid catastrophic goal misalignment.
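To make the contrast concrete, here is a minimal sketch of the two framings of agency, assuming the simplest possible readings of each: a goal-directed agent that maximizes a fixed objective, and a practice-based agent that filters actions by conformance to practices with no terminal goal. All names and values are hypothetical illustrations, not any existing alignment framework.

```python
# Hypothetical sketch contrasting two framings of agency.
# All names are illustrative; neither function models a real alignment system.

from typing import Callable, Iterable


def goal_directed_choice(actions: Iterable[str],
                         utility: Callable[[str], float]) -> str:
    """Orthogonality-style agent: pick the action maximizing a fixed objective."""
    return max(actions, key=utility)


def practice_based_choice(actions: Iterable[str],
                          practices: list[Callable[[str], bool]]) -> list[str]:
    """Virtue-ethical framing: keep every action that conforms to all practices.

    There is no terminal goal to maximize; any conforming action is acceptable.
    """
    return [a for a in actions if all(p(a) for p in practices)]


if __name__ == "__main__":
    actions = ["overclaim results", "report honestly", "stay silent"]

    # Fixed objective: maximize a (hypothetical) engagement score.
    engagement = {"overclaim results": 0.9, "report honestly": 0.6, "stay silent": 0.1}
    print(goal_directed_choice(actions, engagement.get))  # -> "overclaim results"

    # Practices applied as constraints on action: honesty and responsiveness.
    honest = lambda a: a != "overclaim results"
    responsive = lambda a: a != "stay silent"
    print(practice_based_choice(actions, [honest, responsive]))  # -> ["report honestly"]
```

The design difference the sketch highlights is that the goal-directed agent always returns a single maximizer, so a misspecified objective is pursued without limit, while the practice-based agent returns a set of acceptable actions and has no objective to over-optimize.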