The essay argues that rational AIs should lack goals, on the grounds that human action is guided by practices rather than objectives. This perspective challenges conventional goal‑oriented alignment methods and urges practitioners to rethink how objectives are designed. By shifting attention to networks of action and criteria of evaluation, the paper proposes a move toward practice‑based frameworks, a shift that could reduce misaligned incentives in complex systems.