Rational humans align their actions to practices rather than to fixed goals. This essay argues that AI alignment should abandon the pursuit of goal-directed behavior in favor of virtue-ethical agency. By shifting focus from objective functions to criteria for evaluating actions, researchers can avoid the pitfalls implied by the orthogonality thesis. This approach reframes how developers define machine rationality.