A new paper applies Gary Becker's rational offender model to AI alignment. Rather than attempting to define morality, the researchers treat alignment as a mechanism design problem rooted in economics: misbehavior is modeled as a rational response to incentives, deterred through detection and correction. This lets practitioners analyze alignment in terms of equilibrium stability rather than purely philosophical frameworks.
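To make the incentive framing concrete, here is a minimal sketch of Becker's classic rational-offender condition, where an agent misbehaves only when expected gain exceeds expected penalty. This is an illustrative toy, not the paper's actual formulation; the function name and parameters are assumptions for demonstration.

```python
# Toy sketch of Becker's rational-offender condition (hypothetical names):
# an agent offends iff its expected gain exceeds the expected penalty,
# i.e. gain - p_detect * penalty > 0.

def offends(gain: float, p_detect: float, penalty: float) -> bool:
    """Return True when misbehavior is the rational choice under Becker's model."""
    return gain - p_detect * penalty > 0

# Deterrence: raising the detection probability flips the decision,
# pushing the agent toward a stable compliant equilibrium.
print(offends(gain=1.0, p_detect=0.1, penalty=5.0))  # expected gain 0.5 > 0
print(offends(gain=1.0, p_detect=0.5, penalty=5.0))  # expected gain -1.5 < 0
```

In this framing, an alignment mechanism is "stable" when no deviation from compliant behavior yields positive expected gain for the model.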