A new essay in The Gradient argues that rational agents should not have fixed goals. Instead, it proposes aligning AI to practices (networks of actions and their evaluation criteria) rather than to final objectives. This shifts the focus of alignment from goal optimization to virtue-ethical agency. Practitioners can treat this as a theoretical alternative to traditional reward-based reinforcement learning.
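
To make the contrast concrete, here is a minimal sketch, not drawn from the essay itself; the `Practice` class and every name in it are hypothetical. It contrasts scoring behavior against a single fixed objective with evaluating a whole course of action against a practice's plural criteria:

```python
from dataclasses import dataclass
from typing import Callable, List

Action = str
Trajectory = List[Action]

# Fixed-goal framing: one scalar objective summed over the trajectory.
def fixed_goal_score(trajectory: Trajectory,
                     reward: Callable[[Action], float]) -> float:
    """Collapse everything into a single reward signal."""
    return sum(reward(a) for a in trajectory)

# Practice framing (hypothetical sketch): a practice bundles admissible
# actions with several evaluation criteria, none of which is final.
@dataclass
class Practice:
    actions: List[Action]
    criteria: List[Callable[[Trajectory], float]]  # each judges the whole trajectory

    def evaluate(self, trajectory: Trajectory) -> List[float]:
        """Return one score per criterion; no aggregation into one goal."""
        return [c(trajectory) for c in self.criteria]

# Usage: the same trajectory, judged two ways.
chess = Practice(
    actions=["develop", "castle", "sacrifice"],
    criteria=[
        lambda t: 1.0 if "castle" in t else 0.0,  # e.g. a "king safety" norm
        lambda t: len(set(t)) / max(len(t), 1),   # e.g. a "varied play" norm
    ],
)
game = ["develop", "castle", "sacrifice"]
print(fixed_goal_score(game, lambda a: 1.0))  # single objective: 3.0
print(chess.evaluate(game))                   # plural criteria: [1.0, 1.0]
```

The design point is that `evaluate` returns a vector of judgments rather than collapsing them into one reward, which mirrors the essay's move away from final objectives toward criteria internal to a practice.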