A new LessWrong discussion analyzes the probability and timing of existential risks from Artificial Superintelligence (ASI). Contributors weigh current scaling laws against alignment failure scenarios to estimate survival odds. The debate centers on whether catastrophic outcomes are more likely during the transition to ASI or shortly after it; a simple phase-wise decomposition of that kind is sketched below. Practitioners should monitor these probabilistic forecasts to inform safety research priorities.
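As a minimal sketch of how such a phase-wise estimate might be combined, assuming the two phases are treated as independent: the function name `survival_odds` and every input probability below are hypothetical placeholders for illustration, not figures taken from the discussion.

```python
# Minimal sketch: combining phase-wise catastrophe probabilities into an
# overall survival estimate. All names and numbers are hypothetical
# illustrations, not estimates from the LessWrong thread.

def survival_odds(p_transition: float, p_post: float) -> float:
    """Overall survival probability under an independence assumption.

    p_transition: P(catastrophe during the transition to ASI)
    p_post:       P(catastrophe shortly after, given the transition
                   was survived)
    """
    return (1.0 - p_transition) * (1.0 - p_post)

if __name__ == "__main__":
    # Placeholder inputs chosen only to show the arithmetic.
    for p_t, p_p in [(0.10, 0.05), (0.30, 0.20), (0.50, 0.50)]:
        print(f"P(transition)={p_t:.2f}  P(post)={p_p:.2f}  "
              f"P(survival)={survival_odds(p_t, p_p):.2f}")
```

The multiplicative form simply reflects surviving both phases in sequence; real forecasts in the thread may condition the second probability on details of how the transition unfolds rather than assuming independence.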