A new essay in The Gradient argues that rational agents need not have fixed goals. Instead, the author proposes aligning AI to practices and evaluation criteria rooted in virtue ethics. This approach rejects the orthogonality thesis, which holds that any level of intelligence is compatible with nearly any final goal. Practitioners must now consider whether behavioral dispositions offer a more stable alignment path than traditional objective functions.