This essay argues that rational agents should align their actions to practices rather than to fixed goals. Drawing on virtue ethics, it challenges the Orthogonality Thesis as applied to machine intelligence and contends that goal-oriented frameworks themselves create alignment risks. Practitioners should therefore consider practice-based evaluation criteria, which allow AI agents to behave rationally without relying on rigid objective functions.