A new essay on The Gradient argues that rational agents should align their actions to practices rather than to fixed goals. This framework replaces traditional goal-directed optimization with virtue-ethical agency as a way to avoid alignment failures. It challenges the orthogonality thesis by arguing that rationality need not be defined in terms of a fixed objective function at all. Practitioners can use this framing to design more flexible, behavior-based constraints for AI systems.