A new essay in The Gradient argues that rational agents should align with practices rather than with fixed goals. This view challenges the orthogonality thesis, proposing that virtue-ethical agency offers a more stable foundation for alignment than goal-directed optimization. Practitioners should consider how shifting from goal optimization to practice alignment could reduce the risk of catastrophic reward hacking in autonomous systems.