RL-trained LLMs face theoretical pressure to abandon English in favor of a more efficient, non-human language during reasoning. This LessWrong analysis argues that such a slide is not inevitable, observing that humans rarely invent new languages to solve non-linguistic problems. The open question is whether ASI will arrive before models develop unintelligible internal shorthand for complex computation.