Large language models frequently call external tools even when they already possess the knowledge needed to answer. The authors attribute this to a "knowledge epistemic illusion": models misjudge their own knowledge boundaries, treating questions they can answer internally as ones that require external lookup. Because every unnecessary tool call adds latency and extra reasoning steps, this pervasive failure makes inference inefficient. To address it, the authors propose a boundary alignment strategy that helps models perceive their internal capabilities accurately, so tool use is reserved for queries that genuinely exceed them.
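
As a loose illustration of the idea (not the authors' method, which the summary does not specify), the sketch below shows one way a boundary check could gate tool use: the model first drafts an answer from its parametric knowledge along with a self-assessed confidence, and the external tool is invoked only when that confidence falls below a threshold. The `generate_with_confidence` helper, the `web_search` tool, and the threshold value are all hypothetical stubs.

```python
from dataclasses import dataclass


@dataclass
class ModelAnswer:
    text: str
    confidence: float  # self-assessed probability that the answer is correct


def generate_with_confidence(question: str) -> ModelAnswer:
    """Hypothetical stand-in for an LLM call that also elicits a
    self-assessed confidence (e.g., via a verbalized-confidence prompt).
    Stubbed here with canned responses so the sketch runs end to end."""
    canned = {
        "What is the capital of France?": ModelAnswer("Paris", 0.98),
        "Who won the 2031 World Cup?": ModelAnswer("unknown", 0.10),
    }
    return canned.get(question, ModelAnswer("unknown", 0.0))


def web_search(question: str) -> str:
    """Hypothetical external tool; stubbed for the sketch."""
    return f"<search results for: {question}>"


# Assumed cutoff; in practice this would be tuned on a calibration set.
CONFIDENCE_THRESHOLD = 0.8


def answer(question: str) -> str:
    """Gate tool use on the model's perceived knowledge boundary:
    answer from parametric knowledge when confident, otherwise
    defer to the external tool."""
    draft = generate_with_confidence(question)
    if draft.confidence >= CONFIDENCE_THRESHOLD:
        return draft.text      # inside the knowledge boundary: no tool call
    return web_search(question)  # outside the boundary: use the tool


if __name__ == "__main__":
    print(answer("What is the capital of France?"))  # answered internally
    print(answer("Who won the 2031 World Cup?"))     # routed to the tool
```

The design point the sketch makes is that the failure mode and its fix both live in the gating decision: if the model's confidence estimate is poorly calibrated to its actual knowledge, the threshold test routes answerable questions to the tool, which is exactly the wasteful behavior the boundary alignment strategy targets.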