A new essay in The Gradient argues that rational agents need not, and perhaps should not, have fixed goals. Rather than optimizing a fixed objective, the author proposes aligning AI with practices and action-evaluation criteria drawn from virtue ethics. This challenges the Orthogonality Thesis, which holds that any level of intelligence is compatible with any final goal, by suggesting that rationality itself constrains an agent's behavioral dispositions. Practitioners should weigh this shift from goal optimization to practice-based alignment.