Rational agents lack fixed goals, according to a new essay in The Gradient. The author argues that human rationality stems from aligning actions with established practices rather than with final targets. On this view, AI alignment should prioritize virtue-ethical agency over goal-directed optimization. For practitioners, the takeaway is to rethink how models evaluate success: measuring conformity with norms of good practice rather than progress toward a fixed objective.