A new essay in The Gradient argues that rational agents should not have fixed goals. Instead of aligning AI to a fixed objective, it proposes aligning AI to practices: networks of action-dispositions together with the evaluation criteria internal to them. This shifts the focus from goal-directed optimization toward virtue-ethical agency. Practitioners should consider whether this framing could mitigate the brittle failure modes common in reward-based alignment, such as reward hacking.
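To make the contrast concrete, here is a minimal, purely illustrative Python sketch; every name and type in it is a hypothetical stand-in, not an API from the essay. It models a practice as dispositions that propose actions plus criteria that filter them, set against a goal-based agent that simply maximizes a fixed reward.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical types for illustration only; the essay specifies no API.
Action = str
Context = Dict[str, str]

@dataclass
class Practice:
    """A practice as a bundle of action-dispositions plus evaluation criteria.

    Dispositions propose context-appropriate actions; criteria judge whether
    a proposed action conforms to the practice. There is no scalar goal to
    maximize, only standards internal to the practice.
    """
    name: str
    dispositions: List[Callable[[Context], List[Action]]] = field(default_factory=list)
    criteria: List[Callable[[Context, Action], bool]] = field(default_factory=list)

    def acceptable_actions(self, ctx: Context) -> List[Action]:
        proposals = [a for d in self.dispositions for a in d(ctx)]
        # Keep only actions that every evaluation criterion endorses.
        return [a for a in proposals if all(c(ctx, a) for c in self.criteria)]

def goal_based_choice(actions: List[Action], reward: Callable[[Action], float]) -> Action:
    """Goal-based agent: pick whatever maximizes a fixed reward signal."""
    return max(actions, key=reward)

def practice_based_choice(practice: Practice, ctx: Context) -> List[Action]:
    """Practice-based agent: act from dispositions, constrained by the
    practice's own standards rather than a single optimization target."""
    return practice.acceptable_actions(ctx)

if __name__ == "__main__":
    answering = Practice(
        name="answering questions honestly",
        dispositions=[lambda ctx: ["cite sources", "say 'I don't know'", "guess confidently"]],
        criteria=[lambda ctx, a: a != "guess confidently"],  # honesty norm
    )
    print(practice_based_choice(answering, {"question": "..."}))

    # A fixed reward signal can rank the norm-violating action highest.
    reward = {"cite sources": 0.4, "say 'I don't know'": 0.1, "guess confidently": 0.9}
    print(goal_based_choice(list(reward), reward.__getitem__))
```

The point of the toy example is structural: the practice-based agent has no scalar objective to game, only evaluation criteria that an action must satisfy, which is one way the framework might sidestep reward hacking.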