A new essay in The Gradient argues that rational agents need not, and perhaps should not, have fixed goals at all. Instead of aligning AI to objectives, the author proposes aligning it to practices: networks of action-dispositions and the evaluation criteria internal to them. This reframes alignment in virtue-ethical rather than objective-based terms, and it raises a question practitioners should weigh: whether goal-oriented architectures are inherently in tension with human-like rationality.