A new essay in The Gradient argues that rational agents should not have fixed goals; instead, the author proposes aligning AI to practices and evaluation criteria rooted in virtue ethics. This position rejects the orthogonality thesis, which holds that an agent's level of intelligence and its final goals can vary independently: if rationality itself constrains which goals an agent can stably hold, the two are not orthogonal. Practitioners should consider shifting from goal-directed optimization toward cultivating behavioral dispositions as a way to reduce alignment risk in autonomous systems.