A new essay in The Gradient argues that rational agents should lack fixed goals. Instead, the author proposes aligning AI to practices: networks of action-dispositions and evaluation criteria. This shifts the alignment focus from goal-directed behavior to virtue-ethical agency. Practitioners should consider how this framework might mitigate failures that stem from rigid goal pursuit in current LLM-based systems.