The essay argues that rational agents should align their actions with practices rather than with fixed goals. It challenges the Orthogonality Thesis by contending that goal-seeking is not the primary driver of rational behavior. This framework shifts the focus of AI alignment away from objective functions and toward the cultivation of virtue-ethical agency in AI systems.