Claude Opus 4.7 completed a fill-in-the-blank Ancient Greek exercise on the first attempt. Previous versions failed the same exercise unless given targeted prompting, despite possessing the relevant knowledge internally. This result suggests that newer model iterations are better at surfacing latent facts without explicit elicitation. Practitioners should note the improved reliability in retrieving niche linguistic data without complex prompt engineering.
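The kind of check described above can be reproduced with a minimal cloze-style evaluation harness. This is a sketch only: `query_model` is a hypothetical placeholder for whatever model API is in use, here stubbed to return a fixed answer, and the template is an illustrative example rather than the actual exercise.

```python
def query_model(prompt: str) -> str:
    # Stub: a real harness would call the model API here.
    # Hardcoded return value stands in for a model completion.
    return "λόγος"


def score_cloze(template: str, expected: str) -> bool:
    """Ask the model to fill the blank and score by exact match."""
    prompt = f"Fill in the blank: {template}"
    answer = query_model(prompt).strip()
    return answer == expected


# Illustrative cloze item with Ancient Greek text (not the actual exercise).
template = "ἐν ἀρχῇ ἦν ὁ ____"
print(score_cloze(template, "λόγος"))
```

Exact-match scoring is deliberately strict; a production harness might normalize diacritics or accept inflectional variants, but strict matching makes first-attempt success unambiguous.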