The Gradient essay argues that rational AIs should not have goals. It claims that human action is guided by practices rather than by fixed objectives, and that alignment work should therefore focus on networks of action rather than on goal specification. This perspective challenges conventional goal-oriented frameworks and invites a shift toward practice-centric evaluation, a shift the essay suggests could reduce misalignment risks in complex deployments.