A new essay in The Gradient argues that rational agents should not have fixed goals. Instead, the author proposes aligning AI to practices: networks of criteria for evaluating actions, akin to human virtue ethics. This approach rejects the orthogonality thesis, the claim that any level of intelligence is compatible with any final goal. It suggests that shifting from goal-directed optimization toward systems of behavioral dispositions would make alignment safer.