The essay argues that rational AIs should not have goals. Its central claim is that human rationality aligns actions with practices rather than with final objectives, so systems intended to act rationally in this sense should be designed around action-dispositions and evaluation criteria instead of a terminal objective to be optimized. The shift moves the focus from goal-driven optimization to virtue-ethical agency, in which an agent is assessed by how well its conduct fits its dispositions and criteria rather than by how effectively it maximizes an outcome. Such a framework could reduce misaligned incentives in complex deployments, where a single fixed objective is easily pursued at the expense of whatever it fails to capture.
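To make the contrast concrete, here is a minimal sketch in Python of the two design stances. All names in it (Disposition, goal_driven_choice, disposition_based_choice, the sample actions and weights) are hypothetical illustrations, not anything proposed in the essay: one function picks the action that maximizes a single outcome utility, while the other selects the action that best fits a set of weighted action-dispositions evaluated against the current situation.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Hypothetical illustration only; the names and weights below are assumptions
# made for this sketch, not an API or method described in the essay.

@dataclass
class Disposition:
    """An action-disposition: a named evaluation criterion over candidate actions."""
    name: str
    weight: float
    score: Callable[[str, dict], float]  # (action, context) -> degree of fit in [0, 1]

def goal_driven_choice(actions: Iterable[str], utility: Callable[[str], float]) -> str:
    """Goal-driven optimization: pick the action maximizing one final objective."""
    return max(actions, key=utility)

def disposition_based_choice(actions: Iterable[str],
                             dispositions: list[Disposition],
                             context: dict) -> str:
    """Disposition-based selection: pick the action that best fits the weighted
    evaluation criteria in the current situation, rather than maximizing a
    utility defined over final outcomes."""
    def fit(action: str) -> float:
        return sum(d.weight * d.score(action, context) for d in dispositions)
    return max(actions, key=fit)

if __name__ == "__main__":
    actions = ["ship the feature now", "ask for review first", "defer the change"]
    context = {"deadline_near": True}

    # A single final objective collapses every consideration into one number.
    speed_utility = lambda a: {"ship the feature now": 1.0,
                               "ask for review first": 0.4,
                               "defer the change": 0.1}[a]

    dispositions = [
        Disposition("caution", 0.6,
                    lambda a, c: 1.0 if "review" in a or "defer" in a else 0.2),
        Disposition("responsiveness", 0.4,
                    lambda a, c: 0.9 if c.get("deadline_near") and "defer" not in a else 0.3),
    ]

    print(goal_driven_choice(actions, speed_utility))                # ship the feature now
    print(disposition_based_choice(actions, dispositions, context))  # ask for review first
```

The two selections differ because the disposition-based agent weighs situational criteria such as caution against responsiveness instead of ranking actions by a single outcome score; in this toy setup it prefers asking for review, whereas the utility maximizer ships immediately.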