Residual streams in vision-language models retain sensitive information even after passing through low-dimensional projections. Work from Apple Machine Learning Research demonstrates that probing these internal representations can reveal data the model never explicitly generates in its output. This creates a leakage risk for model owners: any deployment that exposes intermediate activations widens the threat surface beyond the generated text itself. Practitioners must therefore account for representational bottlenecks when securing model deployments, since a low-dimensional projection does not, on its own, scrub sensitive content from the residual stream.
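To make the probing idea concrete, here is a minimal sketch of a linear probe. It is not Apple's actual experimental setup: the residual-stream activations are simulated as synthetic vectors with a planted "sensitive attribute" direction, the model's projection is stood in for by a random linear map, and the dimensions (512-wide residual stream, 64-dimensional bottleneck) are illustrative assumptions. A logistic-regression probe is then trained to recover the attribute from the projected activations alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical setup: in a real experiment these would be residual-stream
# activations captured from a vision-language model (e.g. via forward hooks).
# Here we simulate them: Gaussian noise plus a weak per-coordinate component
# along one fixed direction whenever the sensitive attribute is present.
rng = np.random.default_rng(0)
d_model = 512      # residual-stream width (assumed for illustration)
n_samples = 2000

attribute = rng.integers(0, 2, size=n_samples)   # binary sensitive attribute
direction = rng.normal(size=d_model)
direction /= np.linalg.norm(direction)

activations = rng.normal(size=(n_samples, d_model))
activations += 4.0 * attribute[:, None] * direction  # plant a linear signal

# A low-dimensional projection (standing in for the model's bottleneck)
# does not necessarily destroy the signal: project to 64 dims and probe there.
proj = rng.normal(size=(d_model, 64)) / np.sqrt(d_model)
projected = activations @ proj

X_train, X_test, y_train, y_test = train_test_split(
    projected, attribute, test_size=0.25, random_state=0
)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print(f"probe accuracy on projected activations: {probe.score(X_test, y_test):.2f}")
```

The probe's test accuracy lands well above the 50% chance level, illustrating the core point: a linearly encoded attribute survives a low-dimensional projection and can be read out without the model ever emitting it. In a real audit, the same probe would be trained on activations captured from actual forward passes, paired with ground-truth labels for the attribute of interest.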