A new essay in The Gradient argues that rational agents should not hold fixed goals. Instead, the author proposes aligning AI to practices: networks of actions, evaluation criteria, and dispositions. This shifts the alignment focus from outcome-based targets to virtue-ethical agency, and practitioners must now consider whether goal-oriented architectures fundamentally mismatch how human rationality actually functions.