This essay argues that rational agents should align with practices rather than fixed goals. The author challenges the Orthogonality Thesis by arguing that goal-directedness is not the primary driver of human rationality. This shift in perspective points toward a new framework for AI alignment grounded in virtue ethics rather than objective functions.