Rational humans align their actions to practices rather than to final goals. This essay argues that AI alignment should shift from goal-directed optimization toward virtue-ethical agency. By prioritizing criteria for evaluating actions over fixed terminal objectives, developers sidestep the failure modes the orthogonality thesis warns of, in which arbitrarily capable systems can pursue arbitrarily misaligned goals. The approach also offers a technical alternative for practitioners struggling with reward hacking in complex environments.
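As a minimal sketch of the contrast (all names here are hypothetical illustrations, not the essay's proposal or any existing library), the shift can be read as filtering actions by practice-derived evaluation criteria before any preference ordering is applied, rather than maximizing a scalar reward over all actions:

```python
from typing import Callable, Iterable

# Hypothetical types: an Action is anything the agent can do; a Criterion
# is a predicate judging whether an action is acceptable under a practice
# (e.g. honesty), independently of any terminal objective.
Action = str
Criterion = Callable[[Action], bool]

def goal_directed_choice(actions: Iterable[Action],
                         reward: Callable[[Action], float]) -> Action:
    # Classic objective maximization: picks whatever scores highest,
    # including degenerate actions that merely game the reward signal.
    return max(actions, key=reward)

def virtue_ethical_choice(actions: Iterable[Action],
                          criteria: list[Criterion],
                          preference: Callable[[Action], float]) -> Action:
    # Action evaluation first: only actions passing every practice-derived
    # criterion remain candidates; preference only breaks ties among them.
    permissible = [a for a in actions if all(c(a) for c in criteria)]
    if not permissible:
        raise ValueError("no action satisfies the practice criteria")
    return max(permissible, key=preference)

# Toy usage: "exploit_metric" games the reward but fails an honesty criterion.
actions = ["report_truthfully", "exploit_metric"]
reward = {"report_truthfully": 1.0, "exploit_metric": 10.0}.get
honest = lambda a: a != "exploit_metric"

goal_directed_choice(actions, reward)             # -> "exploit_metric"
virtue_ethical_choice(actions, [honest], reward)  # -> "report_truthfully"
```

On this sketch, a reward-hacking action that scores highly but violates a criterion is excluded before optimization ever runs, which is the structural point at issue: the evaluation of actions, not the objective, does the aligning.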