RL-trained models face pressure to abandon English in favor of more efficient, non-human-intelligible internal languages. This LessWrong analysis pushes back on that prediction, questioning whether inventing a new language would actually improve problem-solving. The debate centers on whether ASI will emerge before such linguistic drift occurs; practitioners must weigh whether models' hidden reasoning will remain interpretable.