This essay argues that rational agents should align their actions to practices rather than to fixed goals. The author challenges the Orthogonality Thesis by contending that goal-directedness is not the foundation of rationality. This shift toward virtue-ethical agency points to a new framework for AI alignment, in which practitioners would weigh practice-based evaluation against reward-function optimization.