A new essay in The Gradient argues that rational agents should not have fixed goals. Instead, it proposes aligning AI to practices: networks of action-dispositions and evaluation criteria. This shifts the focus of AI alignment from goal-seeking to virtue-ethical agency. Practitioners must now consider whether fixed objective functions are fundamentally incompatible with rational agency.
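
Since "a network of action-dispositions and evaluation criteria" is abstract, a minimal sketch may help make the contrast concrete. This is not the essay's proposal or code; every name here (Practice, Disposition, Criterion, the toy agents) is a hypothetical assumption, meant only to show how a revisable repertoire of dispositions differs from a single immutable objective.

```python
# Hypothetical sketch only: one way to model the contrast between a fixed
# objective function and a "practice" (action-dispositions plus evaluation
# criteria that can revise those dispositions). All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = Dict[str, float]
Disposition = Callable[[State], str]       # maps a situation to an action
Criterion = Callable[[State, str], bool]   # evaluates an action in context

def fixed_goal_agent(state: State) -> str:
    """Goal-seeking agent: always acts to maximize one immutable objective."""
    scores = {"exploit": state.get("reward", 0.0) + 1.0,
              "explore": state.get("reward", 0.0)}
    return max(scores, key=scores.get)     # the objective never changes

@dataclass
class Practice:
    """A practice: dispositions plus the criteria used to vet and revise them."""
    dispositions: List[Disposition] = field(default_factory=list)
    criteria: List[Criterion] = field(default_factory=list)

    def act(self, state: State) -> str:
        # Try dispositions in order; keep the first action all criteria endorse.
        for dispose in self.dispositions:
            action = dispose(state)
            if all(criterion(state, action) for criterion in self.criteria):
                return action
        return "abstain"                   # no disposition survives evaluation

    def revise(self, disposition: Disposition) -> None:
        # Unlike a fixed goal, the repertoire itself is open to revision.
        self.dispositions.insert(0, disposition)

# Usage: the practice-based agent changes how it acts without ever having
# held a single terminal objective.
practice = Practice(
    dispositions=[lambda s: "explore"],
    criteria=[lambda s, a: not (a == "exploit" and s.get("risk", 0) > 0.5)],
)
state = {"reward": 0.2, "risk": 0.8}
print(fixed_goal_agent(state))   # 'exploit': the objective is fixed
print(practice.act(state))       # 'explore': criteria veto risky exploitation
practice.revise(lambda s: "exploit" if s.get("risk", 0) < 0.3 else "explore")
print(practice.act(state))       # still 'explore' while risk stays high
```

The design point in this sketch is `revise`: a practice-based agent can change how it acts because its evaluation criteria operate on its own dispositions, whereas the fixed-goal agent's objective sits outside the reach of any such revision.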