The essay argues that a rational AI should not have fixed goals. Instead, it proposes aligning an AI's actions to practices, understood as networks of dispositions and evaluation criteria, rather than to fixed objectives. This perspective invites practitioners to build virtue-based systems that reward adaptive, practice-aligned behavior. By shifting the focus from goal optimization to practice alignment, designers can reduce unintended incentives and better capture human values.