The essay argues that rational AIs need not, and should not, be built around goals. It claims that humans themselves act without final goals: rather than optimizing toward ends, they align their actions with practices, which together form a deep network of action‑dispositions and evaluation criteria. Designers should therefore shift from goal‑driven models of agency to practice‑aligned frameworks. This perspective challenges the prevailing goal‑centric paradigm in AI safety research.