The essay argues that rational agents should align with practices rather than fixed goals. On this view, the traditional Orthogonality Thesis gives way to a virtue-ethical framework: AI safety depends on developing action-evaluation criteria rather than static objective functions. For practitioners, this shifts the alignment problem from goal-specification to behavioral disposition.
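The contrast between the two framings can be sketched in code. The following is a minimal illustrative toy, not anything from the essay: `goal_based_choice` ranks actions by a single fixed objective (goal-specification), while `practice_based_choice` first filters actions through a set of action-evaluation criteria (behavioral disposition) and only then selects among the admissible ones. All names and the toy action representation are hypothetical.

```python
# Hypothetical sketch of the two alignment framings discussed above.
# Actions are modeled as (reward, harms_someone) pairs; this encoding
# and all function names are illustrative assumptions.

def goal_based_choice(actions, objective):
    """Goal-specification: rank actions by a fixed scalar objective."""
    return max(actions, key=objective)

def practice_based_choice(actions, criteria):
    """Behavioral disposition: admit only actions that satisfy every
    action-evaluation criterion, then choose among the admissible."""
    return [a for a in actions if all(c(a) for c in criteria)]

actions = [(10, True), (7, False), (3, False)]

# A fixed objective maximizes reward regardless of how it is obtained.
best_by_goal = goal_based_choice(actions, objective=lambda a: a[0])

# Action-evaluation criteria screen out harmful behavior first.
criteria = [lambda a: not a[1]]  # a "do no harm" disposition
admissible = practice_based_choice(actions, criteria)
best_by_practice = max(admissible, key=lambda a: a[0])

print(best_by_goal)      # (10, True)  — highest reward, but harmful
print(best_by_practice)  # (7, False)  — best among admissible actions
```

The point of the toy is structural: the goal-based agent's choice changes only if the objective changes, whereas the practice-based agent's choice changes when its evaluative criteria do, which is where the essay locates the safety-relevant work.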