A new essay in The Gradient argues that rational agents should not have fixed goals. Instead, it proposes aligning AI to practices (networks of actions and their associated evaluation criteria) rather than to static objectives. This shift from goal-directed behavior to virtue-ethical agency challenges the orthogonality thesis, the claim that any level of intelligence is compatible with virtually any final goal. Practitioners should consider how practice-based alignment would change reward function design.
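As a minimal sketch of what that change might look like, the toy Python below contrasts a single static reward function with an evaluator built around a revisable collection of criteria. All names here (`PracticeEvaluator`, `add_criterion`, the criterion functions) are hypothetical illustrations, not drawn from the essay.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch: a fixed-objective reward vs. a practice-based
# evaluator whose criteria form a revisable collection rather than one
# static objective. None of these names come from the essay.

Trajectory = List[str]  # stand-in for a sequence of agent actions


@dataclass
class PracticeEvaluator:
    """Scores behavior against a network of criteria, not one fixed goal."""
    criteria: Dict[str, Callable[[Trajectory], float]] = field(default_factory=dict)

    def add_criterion(self, name: str, fn: Callable[[Trajectory], float]) -> None:
        # Practices evolve: criteria can be added or revised over time,
        # unlike a reward function frozen at training time.
        self.criteria[name] = fn

    def evaluate(self, trajectory: Trajectory) -> Dict[str, float]:
        # No single scalar objective; the evaluation reports every
        # currently active criterion separately.
        return {name: fn(trajectory) for name, fn in self.criteria.items()}


# Fixed-goal baseline: one static scalar objective.
def fixed_goal_reward(trajectory: Trajectory) -> float:
    return float(trajectory.count("task_progress"))


evaluator = PracticeEvaluator()
evaluator.add_criterion("progress", fixed_goal_reward)
evaluator.add_criterion("honesty", lambda t: -float(t.count("deception")))

print(evaluator.evaluate(["task_progress", "deception", "task_progress"]))
# {'progress': 2.0, 'honesty': -1.0}
```

The design point of the sketch is that the evaluator's criteria are first-class, inspectable, and mutable, so "alignment" becomes maintaining a practice's standards rather than optimizing one frozen objective.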