This essay argues that rational agents should align with practices rather than with fixed goals. The author challenges the Orthogonality Thesis by arguing that goal-directedness is not the primary driver of human rationality, and that agency grounded in practices better captures how humans reason and act. This shift toward virtue-ethical agency suggests a framework for AI alignment that avoids the risks of rigid objective functions.