This essay argues that rational agents lack fixed goals, and that AI alignment should therefore shift toward practice-based frameworks. Rather than optimizing for a final objective, the author proposes aligning an agent's actions with evaluative criteria. This theoretical pivot challenges the Orthogonality Thesis, and it asks practitioners to reconsider whether goal-oriented reward functions are a sound basis for building stable, rational AI systems.
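To make the contrast concrete, here is a minimal, hypothetical sketch, not the essay's own formalism: one agent maximizes a fixed reward, while the other first filters actions through a set of evaluative criteria and uses reward only to break ties. All names here (`Action`, `goal_optimizer`, `practice_based_agent`, the specific criteria) are illustrative assumptions, not terms from the source.

```python
# Illustrative toy contrast between a fixed-objective optimizer and a
# criteria-based agent. Names and criteria are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass(frozen=True)
class Action:
    name: str
    expected_reward: float   # payoff under a fixed, final objective
    reversible: bool         # example evaluative property
    transparent: bool        # example evaluative property


def goal_optimizer(actions: Sequence[Action]) -> Action:
    """Pick the action that maximizes a single fixed objective."""
    return max(actions, key=lambda a: a.expected_reward)


# Evaluative criteria modeled as predicates on actions, rather than a
# single quantity to maximize.
Criterion = Callable[[Action], bool]

CRITERIA: list[Criterion] = [
    lambda a: a.reversible,    # prefer actions that can be undone
    lambda a: a.transparent,   # prefer actions a reviewer can audit
]


def practice_based_agent(actions: Sequence[Action],
                         criteria: Sequence[Criterion]) -> Action:
    """Pick among actions satisfying every criterion; reward only breaks ties."""
    acceptable = [a for a in actions if all(c(a) for c in criteria)]
    pool = acceptable or list(actions)  # fall back if nothing qualifies
    return max(pool, key=lambda a: a.expected_reward)


if __name__ == "__main__":
    options = [
        Action("exploit loophole", expected_reward=10.0,
               reversible=False, transparent=False),
        Action("ask for clarification", expected_reward=3.0,
               reversible=True, transparent=True),
    ]
    print("goal optimizer picks:   ", goal_optimizer(options).name)
    print("criteria-based agent picks:", practice_based_agent(options, CRITERIA).name)
```

On these toy inputs the fixed-objective optimizer selects the high-reward but opaque, irreversible action, while the criteria-based agent rejects it; the sketch is only meant to illustrate the kind of shift the essay gestures at, not a proposal for how such criteria should be specified.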