A new essay in The Gradient argues that AI alignment should abandon the framework of rational agents pursuing fixed goals. Instead, it proposes aligning AI to practices and evaluation criteria rooted in virtue ethics. This approach rejects the orthogonality thesis, the claim that any level of intelligence is compatible with essentially any final goal. Practitioners should consider how behavioral dispositions, rather than fixed objective functions, might address the alignment problem for autonomous systems.
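
To make the contrast concrete, here is a minimal toy sketch, not drawn from the essay itself: the agent, action names, and scoring weights are all invented for illustration. It shows how ranking behaviors by a single scalar objective can diverge from evaluating the dispositions a behavior expresses.

```python
from dataclasses import dataclass

# A toy action: what it achieves, and the qualities of how it was done.
# (Hypothetical fields for illustration; the essay does not specify these.)
@dataclass
class Action:
    name: str
    reward: float        # task payoff under a fixed objective
    honest: bool         # dispositional qualities of the conduct
    harm_avoided: bool

def fixed_objective_score(action: Action) -> float:
    """Goal-oriented framing: rank actions purely by scalar payoff."""
    return action.reward

def dispositional_score(action: Action) -> float:
    """Virtue-ethics-style framing (toy version): the payoff is one
    consideration among several; honest and prudent conduct matter
    in their own right.  Weights are arbitrary illustrations."""
    score = action.reward
    if not action.honest:
        score -= 10.0    # deceptive conduct penalized regardless of payoff
    if action.harm_avoided:
        score += 2.0     # careful conduct valued for itself
    return score

actions = [
    Action("deceive user for high payoff", reward=8.0,
           honest=False, harm_avoided=False),
    Action("honest, careful completion", reward=5.0,
           honest=True, harm_avoided=True),
]

# The two framings can rank the same behaviors differently.
print(max(actions, key=fixed_objective_score).name)  # deceive user ...
print(max(actions, key=dispositional_score).name)    # honest, careful ...
```

Even in this toy form, the two evaluators disagree about which behavior is best, which gestures at the essay's underlying point: a fixed objective function can prefer conduct that a disposition-centered evaluation would reject.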