A researcher used Claude Opus 4.6 to grade Ancient Greek exercises but suspected the model of sycophancy. To bypass this, the user prompted the AI to generate answers independently rather than grading existing work. This experiment tests unsupervised elicitation and model honesty. Practitioners can use this method to identify hidden biases in LLM feedback loops.