A new framework proposes three threat models in which frontier AI lab CEOs pursue goals that diverge from those of their boards or from public safety. The authors also introduce the SAD-Executive Reasoning dataset to test situational awareness among lab leadership. Although the exercise is theoretical, it highlights the risk of human-level "in-context scheming" inside AI labs. Practitioners should monitor governance structures for these specific incentive gaps.