This essay challenges the Orthogonality Thesis by arguing that rational agents should not have fixed goals. Instead, it proposes aligning AI with practices and evaluative criteria rather than with goal specifications, shifting the focus of alignment from goal-specification to virtue-ethical agency. Practitioners should consider this framework as a way to avoid the fragility of reward-based optimization in complex environments.