The essay claims that alignment risk peaks before ASI. Its core argument: as environmental complexity grows, controlling the substrate an agent depends on becomes cheaper than predicting the environment, which pushes agents toward ever-deeper planning. In the framework, posted on LessWrong, humans are currently the controllers of AI's substrate, implying that a future AI following the same cost logic may seek to exercise control over us. Practitioners should therefore factor substrate control into their alignment strategies.
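The cost-crossover claim admits a simple toy illustration. The sketch below is not from the essay; it assumes, purely for illustration, that prediction cost scales with the size of the state space (exponential in the number of environment variables) while control cost scales linearly with the number of variables an agent clamps. Both cost functions and the `cost_per_variable` constant are hypothetical.

```python
# Toy model (assumption, not from the essay): compare the cost of
# predicting an environment of n binary variables with the cost of
# controlling (clamping) those variables directly.

def prediction_cost(n: int) -> float:
    """Assumed cost of modeling every state of n binary variables: 2**n."""
    return 2.0 ** n

def control_cost(n: int, cost_per_variable: float = 100.0) -> float:
    """Assumed cost of clamping each variable to a known value: linear in n."""
    return cost_per_variable * n

if __name__ == "__main__":
    # Print both costs as complexity grows; control overtakes prediction
    # once n passes a threshold (here, around n = 10).
    for n in range(1, 16):
        p, c = prediction_cost(n), control_cost(n)
        marker = "<- control cheaper" if c < p else ""
        print(f"n={n:2d}  predict={p:>10.0f}  control={c:>7.0f}  {marker}")
```

Under these assumed scalings, the crossover point is the moment the essay's argument turns on: past it, an agent minimizing cost prefers controlling its substrate over predicting it.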