A new essay in The Gradient argues that rational agents need not have fixed goals. The author contends that human rationality stems from aligning actions with established practices rather than with final objectives. On this view, AI alignment should focus on virtue-ethical agency, and practitioners should consider practice-based evaluation in place of traditional goal-directed reward functions, as in the sketch below.
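
The essay does not prescribe an implementation, but the contrast it draws can be made concrete. The following is a minimal illustrative sketch, not the author's method: the trace format, the `norms` dictionary, and both function names are hypothetical. A goal-directed reward scores only the distance between the final outcome and a target, while a practice-based evaluator scores how consistently each step conforms to established norms, regardless of where the trajectory ends up.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical action trace: each entry is (action_name, outcome_value).
Trace = List[Tuple[str, float]]


def goal_directed_reward(trace: Trace, target: float) -> float:
    """Traditional reward: score only how close the final outcome is to a goal."""
    final_outcome = trace[-1][1]
    return -abs(final_outcome - target)


def practice_based_score(trace: Trace, norms: Dict[str, Callable[[float], bool]]) -> float:
    """Practice-based evaluation: score the fraction of steps that conform
    to established norms, independent of the final outcome."""
    conforming = sum(
        1
        for action, outcome in trace
        if action in norms and norms[action](outcome)
    )
    return conforming / len(trace)


if __name__ == "__main__":
    # Toy trace: an agent negotiating a price over three steps.
    trace = [("open_offer", 100.0), ("counter", 90.0), ("accept", 92.0)]

    # Illustrative norms, e.g. "counteroffers stay within 20% of the opening offer".
    norms = {
        "open_offer": lambda v: v > 0,
        "counter": lambda v: 80.0 <= v <= 120.0,
        "accept": lambda v: v > 0,
    }

    print(goal_directed_reward(trace, target=85.0))  # scores only the endpoint: -7.0
    print(practice_based_score(trace, norms))        # scores every step's conformance: 1.0
```

Note the design difference the sketch is meant to surface: the reward function would assign the same score to any trajectory with the same endpoint, however it got there, whereas the practice-based score can distinguish them by how each intermediate action was taken.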