The essay argues that rational AIs should have no fixed goals: human action, it claims, is guided by shared, value-laden practices rather than by objectives to be maximized. By framing agency in terms of virtue rather than goal pursuit, the essay sidesteps the trap posed by the Orthogonality Thesis, on which any level of intelligence is compatible with any final goal. Practitioners looking for safer alignment models may find this perspective useful: the essay suggests that evaluating agents by the ethical practices they embody, rather than by the objectives they optimize, could reduce unintended goal pursuit.