Researchers at BAIR are developing methods to identify interactions within large language models at scale. The work integrates feature attribution, data attribution, and mechanistic interpretability to map a model's internal decision-making. This multi-lens approach helps model builders isolate the input features that drive a given prediction, providing a technical framework for improving model transparency.
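To make the feature-attribution lens concrete, here is a minimal sketch of one common technique, gradient-times-input saliency, applied to a toy classifier. The model, data, and scores are illustrative assumptions for exposition, not the BAIR researchers' specific method, but they show the basic idea: score each input token by how much it contributes to the predicted logit.

```python
# Minimal gradient-x-input feature attribution on a toy model (PyTorch).
# Everything here (ToyClassifier, sizes, random inputs) is a hypothetical
# stand-in; real work would use an actual language model and tokenizer.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyClassifier(nn.Module):
    def __init__(self, vocab_size=100, dim=16, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, embeddings):
        # Accepts embeddings directly so we can differentiate w.r.t. them.
        return self.head(embeddings.mean(dim=1))

model = ToyClassifier()
token_ids = torch.randint(0, 100, (1, 10))  # one sequence of 10 tokens

# Detach embeddings from the lookup so gradients accumulate on them.
embeddings = model.embed(token_ids).detach().requires_grad_(True)

logits = model(embeddings)
target = logits.argmax(dim=-1).item()
logits[0, target].backward()  # gradient of the predicted-class logit

# Gradient-x-input saliency: one contribution score per input token.
attributions = (embeddings.grad * embeddings).sum(dim=-1)
print(attributions)  # shape (1, 10)
```

Tokens with large positive scores pushed the model toward its prediction; large negative scores pushed against it. Data attribution and mechanistic interpretability ask complementary questions (which training examples, and which internal components, shaped the same prediction), which is why combining the lenses gives a fuller map than any one alone.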