The essay argues that rational agents do not have fixed goals, and that alignment should therefore target practices rather than objectives. This view challenges the Orthogonality Thesis, which holds that any level of intelligence can be paired with any goal: if goals are not fixed, AI alignment is better served by emulating human virtue ethics than by specifying terminal goals. On this account, researchers should shift their focus from designing objective functions to defining criteria for evaluating actions, a pivot that casts traditional goal-based alignment strategies as fundamentally misconceived.