A new essay in The Gradient argues that rational agents should lack fixed goals. Instead, it proposes aligning AI to practices (networks of action-dispositions and evaluation criteria) rather than to final objectives. This shifts the alignment focus from outcome-based rewards toward virtue-ethical agency. Practitioners must now consider whether goal-directed architectures are fundamentally flawed.
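To make the contrast concrete, here is a minimal Python sketch of the distinction as summarized above, not of any construct from the essay itself: a practice bundles action-dispositions with evaluation criteria that judge conduct and never collapse into a single objective, whereas a fixed-goal agent maximizes one reward function. All names here (Practice, dispositions, criteria, fixed_goal_agent) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative types, not from the essay.
Action = str
State = Dict[str, float]

@dataclass
class Practice:
    # Action-dispositions: each maps the current state to a preference
    # weight for one action. No terminal goal is encoded anywhere.
    dispositions: Dict[Action, Callable[[State], float]]
    # Evaluation criteria: predicates that judge the conduct (state, action),
    # not a downstream outcome. They gate behavior rather than score it.
    criteria: List[Callable[[State, Action], bool]]

    def act(self, state: State) -> Action:
        # Choose the action the dispositions most favor in this state.
        return max(self.dispositions, key=lambda a: self.dispositions[a](state))

    def evaluate(self, state: State, action: Action) -> bool:
        # An action is acceptable iff it satisfies every criterion;
        # there is no scalar reward being maximized.
        return all(crit(state, action) for crit in self.criteria)

def fixed_goal_agent(state: State,
                     reward: Callable[[State, Action], float],
                     actions: List[Action]) -> Action:
    # The contrast case: evaluation collapsed into one number to maximize.
    return max(actions, key=lambda a: reward(state, a))

if __name__ == "__main__":
    practice = Practice(
        dispositions={
            "defer": lambda s: s.get("uncertainty", 0.0),
            "proceed": lambda s: 1.0 - s.get("uncertainty", 0.0),
        },
        criteria=[
            # e.g. never proceed under high uncertainty, regardless of payoff.
            lambda s, a: not (a == "proceed" and s.get("uncertainty", 0.0) > 0.8),
        ],
    )
    state = {"uncertainty": 0.9}
    chosen = practice.act(state)
    print(chosen, practice.evaluate(state, chosen))  # -> defer True
```

Under this reading, the design question the essay raises is whether `evaluate` should ever be flattened into a `reward` at all: the practice-aligned agent is judged by how it acts, the fixed-goal agent by what it achieves.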