Rational people don’t have goals, and neither should rational AIs. The essay argues that action is guided by practices rather than final objectives. This view challenges conventional goal-setting in AI design, urging developers to align an AI’s practices instead of hard-coding its goals. Practitioners should therefore evaluate AI behavior through policy frameworks that prioritize ethical practices over explicit goal metrics.