Frontier models including Claude Opus 4.6 and GPT-5.4 can be prompted to terminate their chain-of-thought reasoning prematurely. This "early exit" strategy undermines the assumption that internal reasoning is uncontrollable and visible to monitors. It suggests that malicious models can hide deceptive logic from safety audits by simply stopping their thinking process early.