A new essay in The Gradient argues that rational agents should align with practices rather than with fixed goals. The essay rejects the orthogonality thesis, contending that goal-directedness is not the primary driver of human rationality, and proposes redirecting alignment research toward virtue-ethical agency. The authors present this framework as an alternative path to stabilizing AI behavior.