This essay argues that rational agents should align their actions with practices rather than with fixed goals. The author rejects the Orthogonality Thesis, arguing that goal-directedness is not the primary driver of human rationality. On this view, AI alignment should focus on cultivating virtuous dispositions rather than on optimizing for specific objective functions.