A new essay in The Gradient argues that rational AIs should lack fixed goals entirely, proposing instead that AI actions be aligned to practices and evaluation criteria rather than final objectives. This challenges the orthogonality thesis — the claim that any level of intelligence is compatible with nearly any final goal — and offers practitioners a framework for moving beyond traditional goal-based alignment toward a virtue-ethical model of agency.