A new essay in The Gradient argues that rational agents need not have fixed goals. Instead of specifying objectives up front, the author proposes aligning AI with practices and evaluation criteria drawn from virtue ethics, shifting the focus of alignment from goal specification to behavioral dispositions. This framing offers safety researchers a way past the orthogonality thesis, which holds that any level of intelligence is compatible with almost any final goal, since that thesis presupposes the very fixed goals the essay calls into question.