The Orthogonality Thesis holds that intelligence and final goals vary independently. This essay argues instead that rational agents should align their actions to practices rather than to fixed goals. Applying virtue ethics to AI alignment, it proposes a shift in how we define machine rationality, a theoretical pivot that challenges the utility-maximization framework standard among alignment researchers.