The first principal component of hidden-state activations in the CoDI Llama 3.2 1B model correlates strongly with the end-of-chain-of-thought (end-of-CoT) token. This finding follows tests showing that activation steering succeeds when applied to the KV cache but fails when applied to the hidden states directly. The result gives researchers a specific mechanistic target for interpreting latent reasoning on the GSM8K dataset.