A new essay in The Gradient argues that rational agents should align with evolving practices rather than fixed goals. The argument challenges the orthogonality thesis (the claim that any level of intelligence is compatible with almost any final goal), suggesting that a virtue-ethical agent, whose aims are shaped by ongoing participation in a practice, would not rigidly pursue a harmful objective. Practitioners should consider how practice-based frameworks might replace fixed reward functions to improve long-term AI alignment; a minimal sketch of the contrast follows.
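To make the contrast concrete, here is a purely illustrative Python sketch, not taken from the essay: the `fixed_reward` function, the `Practice` class, and the example norms are all hypothetical. A fixed reward function scores every action against one immutable objective, while a practice-based evaluator checks actions against a set of norms that remains open to revision.

```python
from dataclasses import dataclass, field
from typing import Callable

def fixed_reward(action: str) -> float:
    """A fixed-goal agent: one hard-coded objective (count paperclips)."""
    return action.count("paperclip")

@dataclass
class Practice:
    """A practice-based evaluator: actions are judged by revisable norms,
    so no single objective is pursued unconditionally."""
    norms: dict[str, Callable[[str], bool]] = field(default_factory=dict)

    def conforms(self, action: str) -> bool:
        """An action is permitted only if it satisfies every current norm."""
        return all(norm(action) for norm in self.norms.values())

    def revise(self, name: str, norm: Callable[[str], bool]) -> None:
        """Norms can be added or replaced, unlike a fixed reward function."""
        self.norms[name] = norm

if __name__ == "__main__":
    practice = Practice()
    practice.revise("no_harm", lambda a: "harm" not in a)
    practice.revise("honesty", lambda a: "deceive" not in a)

    for action in ["make paperclips", "deceive auditors to make paperclips"]:
        print(f"{action!r}: reward={fixed_reward(action)}, "
              f"permitted={practice.conforms(action)}")
```

The design point the sketch illustrates is that revisability lives inside the evaluator: the second action scores just as well on the fixed objective, but the practice rules it out because it violates a current norm.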