Rational agents lack fixed goals, argues a new essay in The Gradient. On the author's account, human rationality stems from aligning actions with ongoing practices rather than with final objectives. This perspective suggests that AI alignment should pivot from goal-specification to virtue-ethical agency. If the argument holds, practitioners would need to reconsider how they specify objective functions for autonomous systems.
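To make the contrast concrete, here is a minimal, hypothetical sketch of the two framings: a fixed scalar objective versus an evaluation of how well an action conforms to a set of practices. All names here (`fixed_objective`, `Practice`, `practice_score`, the toy practices) are illustrative inventions, not constructs from the essay, and the scoring scheme is only one naive way such an idea could be operationalized.

```python
# Hypothetical sketch: goal-specification vs. practice-conformance.
# Names and scoring are illustrative assumptions, not from the essay.

from dataclasses import dataclass
from typing import Callable, List


def fixed_objective(outcome: float) -> float:
    """Goal-specification: rank everything by a single scalar outcome."""
    return outcome


@dataclass
class Practice:
    """A norm the agent is expected to uphold, scored on conformance."""
    name: str
    conformance: Callable[[str], float]  # maps an action to a score in [0, 1]


def practice_score(action: str, practices: List[Practice]) -> float:
    """Evaluate an action by how well it conforms to a set of practices,
    rather than by its contribution to a fixed terminal goal."""
    if not practices:
        return 0.0
    return sum(p.conformance(action) for p in practices) / len(practices)


# Two toy practices an agent might be asked to uphold.
honesty = Practice("honesty", lambda a: 0.0 if "deceive" in a else 1.0)
care = Practice("care", lambda a: 0.0 if "harm" in a else 1.0)

print(practice_score("report results accurately", [honesty, care]))  # 1.0
print(practice_score("deceive the user", [honesty, care]))           # 0.5
```

The design difference is the point: the first framing asks "which outcome is best?", while the second asks "which actions are consonant with the practices the agent participates in?", with no terminal state at which evaluation stops.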