A new essay in The Gradient argues that rational agents should align with practices rather than fixed goals. In doing so, the essay rejects the orthogonality thesis (the claim that an agent's level of intelligence and its final goals can vary independently) and proposes virtue ethics as a more stable framework for AI behavior. Practitioners could apply such practice-based criteria for evaluating actions to reduce the risks posed by rigid, goal-directed reward functions.
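For intuition, here is a minimal sketch of what practice-based action evaluation might look like next to plain reward maximization: candidate actions must clear a floor on every practice-derived criterion before the task reward is considered. The action names, criteria, and thresholds below are illustrative assumptions, not details from the essay.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    reward: float               # scalar task reward
    criteria: dict[str, float]  # hypothetical practice-based scores in [0, 1]

def pick_by_reward(actions: list[Action]) -> Action:
    # Rigid goal-directed selection: maximize one scalar, ignore all else.
    return max(actions, key=lambda a: a.reward)

def pick_by_practice(actions: list[Action], floors: dict[str, float]) -> Action:
    # Practice-aligned selection: an action must clear a floor on every
    # criterion before its reward is even compared.
    admissible = [
        a for a in actions
        if all(a.criteria.get(c, 0.0) >= floor for c, floor in floors.items())
    ]
    if admissible:
        return max(admissible, key=lambda a: a.reward)
    # Fallback heuristic (an assumption, not from the essay): if nothing is
    # admissible, take the lowest-reward, least aggressive action.
    return min(actions, key=lambda a: a.reward)

actions = [
    Action("exploit loophole", reward=10.0, criteria={"honesty": 0.2, "care": 0.5}),
    Action("ask for clarification", reward=4.0, criteria={"honesty": 0.9, "care": 0.9}),
]
floors = {"honesty": 0.7, "care": 0.7}

print(pick_by_reward(actions).name)            # exploit loophole
print(pick_by_practice(actions, floors).name)  # ask for clarification
```

The design point of the sketch: the reward-maximizer happily picks the high-reward loophole, while the practice-based filter rules it out before reward ever enters the comparison.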