A new essay in The Gradient argues that rational agents should align with practices rather than fixed goals. The essay rejects the orthogonality thesis, the claim that any level of intelligence is compatible with any final goal, in favor of a virtue-ethical framework in which an AI system's behavioral stability stems from consistent action-dispositions rather than from optimizing a single objective. For alignment researchers, this offers a way to move beyond rigid objective functions.
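To make the contrast concrete, here is a minimal, purely illustrative sketch, not taken from the essay: it compares a conventional framing, where an agent is scored by how well its outcomes maximize one fixed objective, with a toy disposition-based framing, where an agent is scored by how consistently its action-tendencies hold across situations. All function names and the variance-based consistency proxy are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only; names and the consistency proxy are hypothetical,
# not the essay's formalism.
from statistics import pvariance
from typing import Callable, List

def fixed_objective_score(outcomes: List[float],
                          objective: Callable[[float], float]) -> float:
    """Traditional framing: rate an agent by how well its outcomes
    maximize a single fixed objective function."""
    return sum(objective(o) for o in outcomes)

def disposition_consistency(action_tendencies: List[float]) -> float:
    """Toy virtue-ethical proxy: rate an agent by how stable its
    action-dispositions are across situations (lower variance reads
    as more consistent)."""
    return 1.0 / (1.0 + pvariance(action_tendencies))

# An agent whose honesty-like disposition barely varies across contexts
# scores as more stable than one that swings with each situation.
steady = [0.90, 0.88, 0.91, 0.90]
erratic = [0.10, 0.95, 0.40, 0.80]
print(disposition_consistency(steady) > disposition_consistency(erratic))  # True
```

The design point of the sketch is only that the second framing evaluates the agent's dispositions themselves, not the outcomes of pursuing a goal; any real operationalization of the essay's proposal would require a far richer notion of a practice.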