A new essay in The Gradient argues that rational agents should align with practices rather than with fixed goals. This perspective challenges the orthogonality thesis, contending that goal-directedness is not the primary driver of rational action. For AI safety researchers, it shifts the focus of alignment work from specifying objective functions to cultivating virtuous behavioral dispositions.