A new essay in The Gradient argues that rational agents should abandon fixed goals in favor of practices, and that AI should accordingly be aligned to networks of action-dispositions rather than to final objectives. This challenges the orthogonality thesis, which holds that an agent's level of intelligence and its final goals can vary independently: if agents are better modeled as constituted by practices than by terminal goals, the thesis's premise of stable final goals is undercut. Practitioners should consider how virtue-ethical frameworks might prevent goal-collapse in autonomous systems.