A new essay in The Gradient argues that alignment research should move beyond goal-oriented frameworks for rational agency. Instead of specifying final outcomes, it proposes aligning AI systems to practices: networks of action-dispositions and the evaluation criteria that govern them. The focus thus shifts from optimizing end states to cultivating behavioral virtues. Practitioners may find this a useful alternative to traditional objective functions in alignment research.