A new essay in The Gradient argues that rational agents should align with practices rather than with fixed goals. The essay rejects the orthogonality thesis (the claim that any level of intelligence can, in principle, be paired with any final goal), arguing instead that goal-directedness is a fundamentally flawed model of agency for both humans and machines. On this view, practitioners should consider virtue-ethical frameworks, which evaluate how an agent acts rather than which terminal objective it pursues, in order to avoid the failure modes of rigid objective functions in AI alignment, such as an agent satisfying the letter of a fixed target while defeating its intent.