This essay argues that rational agents lack fixed goals, and that AI alignment should therefore center on practices rather than objective functions. The author challenges the Orthogonality Thesis by tying rationality to the criteria an agent uses to evaluate its own actions. On this view, practitioners should focus on cultivating behavioral dispositions rather than specifying static goals to ensure safe system behavior.