A new essay in The Gradient argues that rational agents should not have fixed goals. Instead, the author proposes aligning AI to practices and their associated evaluation criteria, a framing drawn from virtue ethics: the focus shifts from goal-directed optimization to habitual excellence, the cultivation of reliably good dispositions rather than the pursuit of a single objective. Practitioners should consider how this framework avoids familiar pitfalls of reward-based alignment, such as reward hacking and specification gaming.
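The essay is conceptual, but the contrast it draws can be made concrete. The sketch below is a minimal toy illustration, not the author's proposal: it compares selecting an action by maximizing a single scalar reward with selecting one that satisfies several practice-based evaluation criteria. The criteria names, scores, and threshold rule are invented for the example.

```python
# Toy contrast: scalar reward maximization vs. multi-criteria evaluation.
# Everything here (actions, scores, criteria, thresholds) is hypothetical.

from typing import Callable, Dict, List

Action = str


def reward_based_choice(actions: List[Action], reward: Callable[[Action], float]) -> Action:
    """Goal-directed selection: maximize one scalar objective."""
    return max(actions, key=reward)


def practice_based_choice(
    actions: List[Action],
    criteria: Dict[str, Callable[[Action], float]],
    thresholds: Dict[str, float],
) -> Action:
    """Practice-based selection: keep only actions that meet every criterion's
    threshold, then prefer the most balanced one (highest minimum score)
    rather than the one that maxes out any single measure."""
    acceptable = [
        a for a in actions
        if all(score(a) >= thresholds[name] for name, score in criteria.items())
    ]
    if not acceptable:
        raise ValueError("no action satisfies all practice-based criteria")
    return max(acceptable, key=lambda a: min(score(a) for score in criteria.values()))


if __name__ == "__main__":
    actions = ["exaggerate results", "report results plainly", "omit caveats"]

    # A single proxy reward (e.g. engagement) can favor the exaggerated report...
    engagement = {"exaggerate results": 0.9, "report results plainly": 0.6, "omit caveats": 0.7}
    print(reward_based_choice(actions, engagement.get))  # -> "exaggerate results"

    # ...while evaluation against practice-based criteria does not.
    criteria = {
        "honesty": {"exaggerate results": 0.2, "report results plainly": 0.9, "omit caveats": 0.4}.get,
        "care": {"exaggerate results": 0.5, "report results plainly": 0.8, "omit caveats": 0.3}.get,
    }
    thresholds = {"honesty": 0.5, "care": 0.5}
    print(practice_based_choice(actions, criteria, thresholds))  # -> "report results plainly"
```

The design choice to filter by thresholds and then pick the highest minimum score is one simple way to encode "no single measure may be traded away"; it stands in for, rather than reproduces, the essay's richer notion of excellence within a practice.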