A new framework on LessWrong analyzes why humans might fail to strike deals with early misaligned AIs. The author suggests trading post-ASI resources for evidence of past scheming. However, insufficient gains from trade often prevent a viable zone of agreement. This theoretical exercise highlights the difficulty of negotiating with deceptive systems.