A new essay in The Gradient argues that rational agents should lack fixed goals. Instead, the author proposes aligning AI with practices and evaluation criteria rooted in virtue ethics, shifting the focus from goal-directed behavior to action-dispositions. Practitioners must now consider whether such behavioral patterns offer a more stable alignment target than objective functions.