Researchers at BAIR are developing methods to identify interactions at scale within large language models. They combine feature attribution, data attribution, and mechanistic interpretability to map how models make internal decisions. Rather than applying each of these techniques in isolation, the combined approach aims to provide a more comprehensive view of model behavior, letting practitioners better audit how specific training data influences final predictions. A minimal illustration of one of these building blocks appears below.
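As a rough illustration of the kind of signal feature attribution provides, the sketch below computes a simple input-times-gradient saliency score per input token for a small causal language model. This is a generic, widely used attribution heuristic, not BAIR's specific method; the choice of GPT-2 via Hugging Face `transformers` is purely illustrative.

```python
# Minimal sketch: input-x-gradient feature attribution for a causal LM.
# Assumes torch and transformers are installed; "gpt2" is an illustrative model choice.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

text = "The capital of France is"
inputs = tokenizer(text, return_tensors="pt")
input_ids = inputs["input_ids"]

# Embed tokens manually so gradients can be taken w.r.t. the embeddings.
embeddings = model.transformer.wte(input_ids).detach().requires_grad_(True)

outputs = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"])
logits = outputs.logits

# Backpropagate from the logit of the model's top next-token prediction.
next_token_logits = logits[0, -1]
target = next_token_logits.argmax()
next_token_logits[target].backward()

# Input-x-gradient saliency, summed over the embedding dimension per token.
saliency = (embeddings.grad * embeddings).sum(dim=-1).squeeze(0)
for tok, score in zip(tokenizer.convert_ids_to_tokens(input_ids[0].tolist()),
                      saliency.tolist()):
    print(f"{tok:>12s}  {score:+.4f}")
```

Scores like these indicate which input tokens most influenced a single prediction; data attribution and mechanistic analysis address complementary questions (which training examples mattered, and which internal components carried the computation).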