The Gradient publishes a theoretical critique arguing that rational agents should align with practices rather than with fixed goals. The critique rejects the orthogonality thesis, contending that goal-oriented AI design is fundamentally flawed. Practitioners must consider whether virtue-ethical frameworks offer a more stable path to AI alignment than traditional objective functions.