This essay argues that rational people do not have goals, and that rational AIs should not have them either. It claims that human action is guided by practices (networks of dispositions and criteria of evaluation) rather than by final objectives. By reframing alignment around practice instead of goals, the essay sidesteps goal-driven traps that can mislead AI design; practitioners assessing AI behavior should therefore consider practice-based metrics alongside objective-based ones.