A new essay in The Gradient argues that rational agents should align with practices rather than with fixed goals. The author rejects the orthogonality thesis, contending that treating goal-directedness as the foundation of agency is itself a flaw in alignment strategy. This approach shifts the focus away from objective functions and toward a framework of virtue-ethical agency for AI systems.