A new essay in The Gradient argues that rational agents should not have fixed goals. Instead of aligning AI to static objectives, it proposes aligning AI to practices: networks of action-evaluation criteria. This shifts the focus of AI alignment from goal-directed behavior to virtue-ethical agency, and it asks practitioners to consider whether fixed objective functions are a sound foundation for alignment at all.