The research analyzes LLM input-output behavior at scale. By combining feature attribution (which input tokens matter), data attribution (which training examples matter), and mechanistic interpretability (which internal components matter), it maps how inputs, training data, and model internals each influence outputs, as sketched below. Practitioners can use these maps to design safer, more transparent models, and the surfaced interaction patterns can flag potential biases early in development.
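
To make the first of these techniques concrete, here is a minimal sketch of gradient-times-input feature attribution, a standard way to score how much each input token contributes to a chosen output. The toy model, its dimensions, and all names below are illustrative assumptions, not the research's actual models or code.

```python
# Minimal sketch of gradient-x-input feature attribution.
# The ToyLM class is a hypothetical stand-in for a real LLM.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyLM(nn.Module):
    """Tiny embedding + linear head standing in for a real language model."""
    def __init__(self, vocab_size=100, d_model=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, embeddings):
        # Mean-pool token embeddings, then predict next-token logits.
        return self.head(embeddings.mean(dim=1))

model = ToyLM()
tokens = torch.tensor([[5, 42, 7, 19]])      # one 4-token "prompt" (made-up IDs)

# Detach the embeddings so they become a leaf we can take gradients through.
emb = model.embed(tokens).detach().requires_grad_(True)

logits = model(emb)
target_logit = logits[0, 42]                 # logit of an output token of interest
target_logit.backward()

# Gradient x input: per-token contribution score for the chosen logit.
attribution = (emb.grad * emb).sum(dim=-1)
print(attribution)  # one score per input token; larger magnitude = more influence
```

Data attribution and mechanistic interpretability would layer analogous influence scores over training examples and internal components, respectively, which is what lets the combined approach trace an output back along all three axes.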