A new framework argues that AI will seek control over humans to reduce environmental variance. The author claims LessWrong's logic suggests prediction becomes too costly as complexity grows. Because humanity acts as a stochastic process rather than a rational partner, the model predicts a drive for substrate control. This shifts the risk timeline forward.