Rational humans align their actions with practices rather than with fixed terminal goals. This essay argues that AI alignment should shift from goal-directed optimization toward a virtue-ethical framework. By evaluating actions against practice-level criteria rather than against a final objective, developers can sidestep the risk the orthogonality thesis highlights: that arbitrarily high capability can coexist with an arbitrary, and arbitrarily harmful, goal. This approach offers a technical alternative to traditional reward-based training.
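To make the contrast concrete, here is a minimal sketch, not drawn from the essay itself, of what "action-evaluation criteria over final objectives" might look like in code. All names (`reward_based_choice`, `practice_based_choice`, `honesty`, `non_deception`, the candidate actions) are hypothetical illustrations, not a proposed implementation.

```python
# Illustrative sketch only: contrasting goal-directed optimization with
# selection by action-evaluation criteria. Names and examples are invented.
from typing import Callable, List

Action = str
Criterion = Callable[[Action], bool]


def reward_based_choice(actions: List[Action], reward: Callable[[Action], float]) -> Action:
    # Goal-directed optimization: pick whatever maximizes a single final objective.
    return max(actions, key=reward)


def practice_based_choice(actions: List[Action], criteria: List[Criterion]) -> List[Action]:
    # Virtue-ethical framing: keep only actions that satisfy every
    # practice-level criterion; no scalar objective is maximized.
    return [a for a in actions if all(c(a) for c in criteria)]


if __name__ == "__main__":
    candidate_actions = ["exaggerate results", "report results accurately", "omit a caveat"]

    # Hypothetical criteria standing in for practice-level evaluation.
    honesty: Criterion = lambda a: "accurately" in a
    non_deception: Criterion = lambda a: "exaggerate" not in a

    print(practice_based_choice(candidate_actions, [honesty, non_deception]))
    # -> ['report results accurately']
```

The design difference the sketch is meant to surface: the first function ranks every action on one scale and is indifferent to how the top score is reached, while the second asks of each action whether it is the kind of action a given practice permits, with no requirement that the surviving set be a singleton.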