A new essay in The Gradient argues that rational agents need not have fixed goals. Instead of optimizing a static objective, the author proposes aligning AI systems to practices and evaluation criteria drawn from virtue ethics, shifting the focus from goal-directed behavior to dispositional habits. This raises a practical question for alignment researchers: whether habit-based alignment can outperform traditional objective functions in complex environments.