A new essay in The Gradient argues that rational agents need not have fixed goals, and that AI alignment should accordingly shift its focus from goal-seeking to the adoption of virtuous practices. In this framing, a single rigid objective function is replaced by a network of action-evaluation criteria. The author suggests that practitioners consider how such habit-based frameworks might mitigate the risks of goal-directed optimization.
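As a rough illustration of the contrast (the essay provides no code, so the names `Action`, `virtues`, and `permissible` below are hypothetical), a goal-directed agent maximizes one scalar objective, while a network of action-evaluation criteria vets each candidate action against several independent checks:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    name: str
    features: Dict[str, float]  # e.g. {"honesty": 0.9, "harm": 0.1}

# Goal-directed framing: one rigid objective, where more is always better.
def objective(outcome: float) -> float:
    return outcome

# Habit-based framing: each criterion evaluates the action itself,
# not the outcome it produces. All names here are illustrative.
Criterion = Callable[[Action], bool]

virtues: List[Criterion] = [
    lambda a: a.features.get("honesty", 0.0) >= 0.5,   # be honest
    lambda a: a.features.get("harm", 1.0) <= 0.2,      # avoid harm
    lambda a: a.features.get("fairness", 0.0) >= 0.5,  # act fairly
]

def permissible(action: Action) -> bool:
    """An action is taken only if every virtue criterion approves it,
    rather than because it maximizes a single quantity."""
    return all(check(action) for check in virtues)

candidates = [
    Action("overstate results", {"honesty": 0.2, "harm": 0.1, "fairness": 0.6}),
    Action("report accurately", {"honesty": 0.9, "harm": 0.1, "fairness": 0.8}),
]
for a in candidates:
    print(a.name, "->", "permitted" if permissible(a) else "rejected")
```

The design difference this sketch highlights is that the criteria act as filters on conduct rather than a score to climb, so no single quantity dominates the agent's behavior the way an objective function does under optimization pressure.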