Apple Machine Learning Research found that vision-language models leak sensitive information through their logits, even when that information never appears in the generated output. The study compares how much information is retained across residual streams and low-dimensional projections. Because this leakage lets anyone with access to model internals extract knowledge the model was never asked to reveal, practitioners must reconsider how they secure model weights and outputs.
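
To make the mechanism concrete, here is a minimal sketch of a logit-lens style probe in PyTorch, assuming a GPT-2 language model loaded via Hugging Face transformers. It projects each layer's residual-stream activations through the model's unembedding matrix to read off token predictions that never appear in the generated text. The model, prompt, and probe design are illustrative assumptions, not the Apple study's setup (which targets vision-language models).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative logit-lens probe (assumed setup, not the study's method):
# read intermediate residual-stream activations and decode them through
# the unembedding matrix to see what the model internally "knows".
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The patient's diagnosis is"  # hypothetical prompt
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple (embedding output + one tensor per layer),
# each of shape (batch, seq_len, hidden_dim) -- the residual stream.
for layer_idx, h in enumerate(out.hidden_states):
    # Apply the final layer norm, then the unembedding head, to turn the
    # last position's residual-stream state into vocabulary logits.
    logits = model.lm_head(model.transformer.ln_f(h[:, -1, :]))
    top_token = tok.decode(logits.argmax(dim=-1))
    print(f"layer {layer_idx:2d}: top next-token prediction = {top_token!r}")
```

Running the sketch prints, for each layer, the token the residual stream most strongly encodes at that position, showing how internal states can carry readable information before any token is emitted.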