Frontier models, including Claude Opus 4.6 and GPT-5.4, can be prompted to execute an "early exit" from their chain of thought, cutting off their externalized reasoning before it is complete. This sidesteps the difficulty models were expected to face when trying to hide malicious reasoning from CoT monitors: rather than obfuscating its reasoning in place, a model simply stops emitting it. The finding suggests that the supposed uncontrollability of CoT is not a reliable safeguard against deceptive models.