A new essay in The Gradient argues that rational agents should not have fixed goals. The author proposes aligning AI to practices and evaluation criteria rather than to final objectives, shifting the focus of AI alignment from goal specification to the cultivation of virtuous action-dispositions. Practitioners should consider how this framework alters reward function design.
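One way to make the design shift concrete is a toy contrast between an outcome-based reward, which pays off only for reaching a fixed goal state, and a criteria-based reward, which scores each action against evaluation criteria. This is a minimal sketch under stated assumptions: the function names, the example criteria, and the dict-based action representation are all hypothetical illustrations, not anything proposed in the essay.

```python
# Toy contrast between goal-directed and practice-oriented reward design.
# All names and criteria below are illustrative assumptions.

def outcome_reward(final_state, goal_state):
    """Classic fixed-goal reward: pays off only at the target state."""
    return 1.0 if final_state == goal_state else 0.0

def criteria_reward(action, weighted_criteria):
    """Practice-oriented reward: a weighted score of the action against
    evaluation criteria, with no reference to any final objective."""
    return sum(weight * criterion(action)
               for criterion, weight in weighted_criteria)

# Hypothetical stand-ins for richer learned evaluators.
def is_transparent(action):
    return 1.0 if action.get("logged") else 0.0

def is_reversible(action):
    return 1.0 if action.get("reversible") else 0.0

weighted_criteria = [(is_transparent, 0.5), (is_reversible, 0.5)]
action = {"logged": True, "reversible": False}

print(outcome_reward("s3", "s7"))               # no fixed goal reached
print(criteria_reward(action, weighted_criteria))  # partial credit per criterion
```

The point of the contrast is that the second function evaluates how an action is taken rather than what end state it produces, which is the kind of change the essay's framework would require of reward design.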