This essay argues that rational agents should align with practices rather than fixed goals. The author rejects the Orthogonality Thesis, arguing instead that human rationality is grounded in criteria for evaluating actions. This shift moves AI alignment away from objective functions and toward virtue-ethical agency. Practitioners should consider how practice-based frameworks can avoid the fragility of goal-directed systems.