ReasoningBank is a memory framework that distills an agent's own past trajectories, both successful and failed, into reusable reasoning strategies, helping the agent learn from its mistakes. Developed by researchers at Google, it moves beyond static prompting toward iterative self-improvement: relevant strategies are retrieved before each new task, and new ones are distilled afterward. This lets agents correct recurring reasoning errors at inference time, without additional weight training or massive new human-labeled datasets.
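The retrieve-act-distill loop can be sketched in miniature. This is a hypothetical illustration, not the published implementation: the `MemoryItem` and `ReasoningBank` names, the keyword-overlap retrieval, and the success/failure labeling are all assumptions standing in for the LLM-based judging, extraction, and embedding retrieval a real system would use.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    title: str                 # short name of the strategy
    content: str               # transferable reasoning guidance
    keywords: set = field(default_factory=set)

class ReasoningBank:
    def __init__(self):
        self.items: list[MemoryItem] = []

    def distill(self, task: str, trajectory: str, success: bool) -> MemoryItem:
        # Both successes and failures yield strategies; failures become
        # cautionary guidance. (A real system would use an LLM judge and
        # extractor here; this version is keyword-based for illustration.)
        prefix = "Do" if success else "Avoid"
        item = MemoryItem(
            title=f"{prefix}: {task}",
            content=trajectory,
            keywords=set(task.lower().split()),
        )
        self.items.append(item)
        return item

    def retrieve(self, task: str, k: int = 2) -> list[MemoryItem]:
        # Rank stored items by keyword overlap with the new task; a real
        # system would use embedding similarity instead.
        words = set(task.lower().split())
        ranked = sorted(self.items,
                        key=lambda m: len(m.keywords & words),
                        reverse=True)
        return ranked[:k]

bank = ReasoningBank()
bank.distill("filter search results by date", "used the date facet first", success=True)
bank.distill("filter search results by price", "sorted the whole page manually", success=False)
hits = bank.retrieve("filter products by date")
print([m.title for m in hits])
# → ['Do: filter search results by date', 'Avoid: filter search results by price']
```

The key design point this sketch preserves is that failed trajectories are not discarded: they are distilled into "Avoid"-style items that steer the agent away from repeating the same error.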