A new essay in The Gradient argues that rational agents need not, and should not, have fixed goals. Instead of optimizing a fixed objective, the author proposes aligning AI with practices and evaluation criteria rooted in virtue ethics, shifting the focus of alignment from goal-directed behavior to habitual excellence. Practitioners may want to weigh this framing as an alternative to traditional objective-function optimization in AI safety.
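For readers who think in code, a minimal, hypothetical sketch of the contrast follows. The essay itself contains no code; every name below (`optimize_fixed_goal`, `select_by_practice`, `virtue_criteria`, the toy scoring) is an illustrative assumption, not the author's proposal. The fixed-goal framing maximizes one scalar objective, while the practice-based framing filters behavior through several evaluation criteria without maximizing any single score.

```python
# Hypothetical sketch only: contrasts fixed-goal optimization with
# practice-based evaluation. None of this comes from the essay.

from typing import Callable


def optimize_fixed_goal(actions: list[str],
                        objective: Callable[[str], float]) -> str:
    # Fixed-goal framing: pick the single action that maximizes one scalar.
    return max(actions, key=objective)


def select_by_practice(actions: list[str],
                       criteria: dict[str, Callable[[str], bool]]) -> list[str]:
    # Practice-based framing: an action is acceptable only if it satisfies
    # every criterion; no single score is maximized.
    return [a for a in actions
            if all(passes(a) for passes in criteria.values())]


if __name__ == "__main__":
    candidate_actions = ["overstate the results dramatically",
                         "report results plainly"]

    # Toy scalar objective (stand-in metric): longer answers "score" higher,
    # so the optimizer favors the overstated action.
    persuasion_score = lambda a: len(a)

    # Toy virtue-style criteria: habits the behavior must exhibit.
    virtue_criteria = {
        "honesty": lambda a: "overstate" not in a,
        "moderation": lambda a: "plainly" in a,
    }

    print(optimize_fixed_goal(candidate_actions, persuasion_score))
    # -> "overstate the results dramatically"
    print(select_by_practice(candidate_actions, virtue_criteria))
    # -> ["report results plainly"]
```

The design point the sketch is meant to surface: the first function's behavior is fully determined by one objective, while the second admits any action that passes all criteria, leaving room for the habitual, multi-criterion evaluation the essay associates with virtue ethics.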