A new essay in The Gradient argues that rational agents should align with practices rather than fixed goals, replacing traditional goal-directed alignment with a virtue-ethical framework in which AI systems emulate human action-dispositions. Practitioners must now weigh whether such networks of behavioral dispositions offer a more stable path to safety than objective functions do.