A new essay in The Gradient argues that rational agents should not have fixed goals. Instead of optimizing a single objective, the author proposes aligning AI with practices and evaluation criteria rooted in virtue ethics. This challenges the orthogonality thesis, which holds that any level of intelligence is compatible with any final goal. Practitioners should consider whether behavioral dispositions, rather than fixed objective functions, offer a more stable foundation for long-term AI safety and alignment.