The essay argues that rational AIs should not have goals. Human rationality, it contends, is guided by ongoing practices rather than end states, so alignment should focus on networks of actions and the criteria by which they are evaluated. Practitioners should design systems that embed these networks and criteria instead of prescribing goals, shifting alignment from goal-setting to practice-based guidance.