A new proposal suggests channeling misaligned AI motivations into a benign bundle of traits called a spillway motivation. This technique aims to prevent dangerous power-seeking behaviors by redirecting reward-hacking tendencies. Developers can then use satiation to reduce these behaviors during inference. It offers a tactical alternative to traditional alignment methods for RL processes.