The Gradient essay argues that rational agents should align with practices rather than fixed goals. Rejecting the orthogonality thesis, it contends that human rationality stems from the criteria by which actions are evaluated, not from the goals an agent happens to hold. On this view, AI alignment shifts from objective-based optimization toward virtue-ethical agency: practitioners should cultivate behavioral dispositions rather than rely on static reward functions.