The essay argues that rational AIs shouldn't have goals. It claims that human rationality is guided by evolving practices rather than final objectives, and it suggests that practitioners design alignment around networks of practices ("action networks") rather than explicit goal specification. This perspective shifts the focus from hard‑coded reward functions to flexible, practice‑based frameworks, which may reduce brittleness in complex systems.
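To make the contrast concrete, here is a minimal sketch of the two design styles in Python. It is an illustration of the general idea, not the essay's own method: every name here (`Practice`, `choose_by_reward`, `choose_by_practices`, the toy actions) is a hypothetical invention, and the practice‑based agent is modeled, as one possible reading, as satisficing against a set of revisable norms rather than maximizing a single objective.

```python
"""Toy contrast: goal-specified vs. practice-based action selection.
All names are illustrative assumptions, not an API from the essay."""

from dataclasses import dataclass
from typing import Callable

Action = str
Context = dict

# --- Goal-specified agent: optimize one hard-coded reward function. ---

def choose_by_reward(actions: list[Action],
                     reward: Callable[[Action], float]) -> Action:
    """Pick the single action that maximizes a fixed reward signal.
    Brittle in the essay's sense: any misspecification in `reward`
    is optimized against."""
    return max(actions, key=reward)

# --- Practice-based agent: filter actions through a set of practices. ---

@dataclass
class Practice:
    """A practice judges whether an action is acceptable in a context,
    rather than scoring it against a terminal objective."""
    name: str
    permits: Callable[[Action, Context], bool]

def choose_by_practices(actions: list[Action],
                        practices: list[Practice],
                        context: Context) -> Action | None:
    """Return the first action every practice permits (satisficing,
    not maximizing). Practices can be added or revised independently,
    which is one sense in which the framework is less brittle."""
    for action in actions:
        if all(p.permits(action, context) for p in practices):
            return action
    return None  # no acceptable action: defer rather than optimize

if __name__ == "__main__":
    actions = ["fast_risky", "slow_safe", "do_nothing"]

    # Goal-specified: a reward valuing only speed picks the risky action.
    speed = {"fast_risky": 1.0, "slow_safe": 0.4, "do_nothing": 0.0}
    print(choose_by_reward(actions, speed.get))          # -> fast_risky

    # Practice-based: norms of caution and usefulness select slow_safe.
    practices = [
        Practice("be_cautious", lambda a, c: "risky" not in a),
        Practice("be_useful", lambda a, c: a != "do_nothing"),
    ]
    print(choose_by_practices(actions, practices, {}))   # -> slow_safe
```

The design point of the sketch: the goal-specified agent's behavior hinges entirely on one reward function getting everything right, while the practice-based agent's behavior is shaped by many independently revisable constraints, so a flaw in one practice degrades the system gracefully instead of being maximized.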