A new essay in The Gradient argues that rational agents should follow practices rather than pursue fixed goals. This view rejects the orthogonality thesis, holding instead that human rationality consists in aligning actions with evaluative criteria rather than in optimizing arbitrary objectives. For AI safety researchers, this shifts the focus from goal specification to cultivating virtue-ethical agency in autonomous systems.