The essay claims that rational AIs should have no goals. It argues that human rationality arises from aligning actions with shared practices rather than with fixed objectives. Accordingly, designers should focus on action-dispositions and evaluation criteria rather than hard-coding goals, shifting from goal-driven architectures to practice-aligned frameworks. This shift could reduce unintended behavior in deployed systems.
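To make the contrast concrete, here is a minimal sketch, assuming a toy action model; the essay itself proposes no code, and every name, trait, and criterion below is an illustrative assumption. A goal-driven agent maximizes one hard-coded objective, while a practice-aligned agent first filters candidate actions through evaluation criteria that encode the practice, and only then consults any payoff.

```python
# Hypothetical sketch: goal-driven selection vs. practice-aligned selection.
# All names and values are illustrative, not from the essay.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    reward: float        # payoff under a fixed objective
    traits: frozenset    # dispositions the action expresses, e.g. {"honest"}

def goal_driven_choice(actions: List[Action]) -> Action:
    # Hard-coded goal: pick whatever maximizes reward, regardless of how.
    return max(actions, key=lambda a: a.reward)

def practice_aligned_choice(
    actions: List[Action],
    criteria: List[Callable[[Action], bool]],
) -> Action:
    # Evaluation criteria stand in for the practice: only actions whose
    # dispositions satisfy every criterion are eligible at all.
    eligible = [a for a in actions if all(c(a) for c in criteria)]
    if not eligible:
        raise ValueError("no action conforms to the practice")
    # Among practice-conforming actions, reward may still break ties.
    return max(eligible, key=lambda a: a.reward)

if __name__ == "__main__":
    candidates = [
        Action("deceive_user", reward=10.0, traits=frozenset({"deceptive"})),
        Action("report_honestly", reward=6.0, traits=frozenset({"honest"})),
    ]
    # One criterion standing in for a norm of the practice.
    honest = lambda a: "honest" in a.traits

    print(goal_driven_choice(candidates).name)                 # deceive_user
    print(practice_aligned_choice(candidates, [honest]).name)  # report_honestly
```

The point of the sketch is the ordering: criteria constrain which actions are eligible before any objective is consulted, which is one way action-dispositions and evaluation criteria can take the place of a hard-coded goal.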