Large language models frequently invoke external tools even when they already possess the knowledge needed to answer. The researchers attribute this to a knowledge epistemic illusion: models misjudge the boundaries of their own internal knowledge, which triggers unnecessary tool calls and inefficient reasoning cycles. To address this, the study proposes an epistemic boundary alignment strategy that helps models accurately assess whether the knowledge they need is internally available.
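The core idea can be illustrated with a minimal sketch of an epistemic-boundary gate: before calling a tool, the model first estimates whether it already knows the answer. This is an assumption-laden toy, not the paper's actual method; all names here (`estimate_confidence`, `answer`, the `facts` store, the `tool` callable) are hypothetical placeholders.

```python
# Illustrative sketch only -- not the study's implementation.
# A toy "epistemic boundary" check: answer from internal knowledge
# when self-estimated confidence is high, otherwise call the tool.

def estimate_confidence(question: str, known_facts: dict) -> float:
    """Toy self-assessment: 1.0 if the question is covered by internal
    knowledge, else 0.0. A real model would produce a graded score."""
    return 1.0 if question in known_facts else 0.0

def answer(question: str, known_facts: dict, tool, threshold: float = 0.5):
    """Route the query: internal knowledge if confidence clears the
    threshold, external tool otherwise. Returns (answer, source)."""
    if estimate_confidence(question, known_facts) >= threshold:
        return known_facts[question], "internal"
    return tool(question), "tool"

# Usage: one question is internally known, the other falls outside
# the model's knowledge boundary and is routed to the tool.
facts = {"capital of France": "Paris"}
lookup = lambda q: f"<looked up: {q}>"

print(answer("capital of France", facts, lookup))  # ('Paris', 'internal')
print(answer("capital of Mars", facts, lookup))    # routed to the tool
```

A misaligned gate corresponds to the illusion described above: a threshold set too low (or a miscalibrated confidence estimate) sends internally answerable questions to the tool anyway.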