Rational agents lack fixed goals, according to a new essay in The Gradient. The author argues that AI alignment research should shift its focus from goal-directed behavior to networks of practiced action-evaluation criteria, replacing the orthogonality thesis with a virtue-ethical account of agency. Practitioners must now consider whether such behavioral dispositions outperform static objective functions in complex environments.
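To make the contrast concrete, here is a minimal, purely illustrative sketch of the two decision styles the essay contrasts: maximizing a single static objective versus evaluating actions against a network of virtue-like criteria. All names (`Criterion`, `honesty`, `prudence`, the action labels, and the weakest-link aggregation) are hypothetical assumptions, not the essay's own formalism.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical illustration only: names, scores, and aggregation rule
# are invented for this sketch, not taken from the essay.

def static_objective(action: str) -> float:
    # A fixed scalar objective the agent simply maximizes.
    return {"exploit": 1.0, "cooperate": 0.6, "defer": 0.2}.get(action, 0.0)

@dataclass
class Criterion:
    name: str
    evaluate: Callable[[str], float]  # score an action against one disposition

def choose_by_objective(actions: List[str]) -> str:
    # Goal-directed agent: pick whatever maximizes the one objective.
    return max(actions, key=static_objective)

def choose_by_criteria(actions: List[str], criteria: List[Criterion]) -> str:
    # Virtue-style agent: an action must do acceptably on every criterion,
    # so we aggregate by the weakest link rather than a single maximum.
    def score(action: str) -> float:
        return min(c.evaluate(action) for c in criteria)
    return max(actions, key=score)

criteria = [
    Criterion("honesty", lambda a: {"exploit": 0.1, "cooperate": 0.9, "defer": 0.8}[a]),
    Criterion("prudence", lambda a: {"exploit": 0.7, "cooperate": 0.6, "defer": 0.9}[a]),
]
actions = ["exploit", "cooperate", "defer"]

print(choose_by_objective(actions))           # → exploit
print(choose_by_criteria(actions, criteria))  # → defer
```

With these invented scores, the single-objective agent picks the highest-reward action even though it fails the honesty criterion, while the criteria-network agent settles on an action that no disposition vetoes; the toy numbers exist only to show how the two selection rules can diverge.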