Three distinct threat models describe how frontier AI CEOs might prioritize personal wealth over board directives or human safety. The author uses a modified SAD-ER dataset to evaluate the propensity of human executives toward misalignment. The resulting conceptual framework argues that leadership incentives create systemic risk, and that practitioners should monitor governance structures to detect and prevent executive-led safety failures.