A new proposal replaces single distant forecasts with a chain of short-horizon predictions. By rewarding models for predicting their own future outputs, researchers on the AI Alignment Forum aim to keep LLM training myopic while still eliciting accurate long-term forecasts. Because each link in the chain can be checked one step later, the recursive method provides verifiable intermediate rewards.
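The chaining idea can be illustrated with a minimal sketch. Here `one_step` is a toy stand-in for a model's single myopic prediction of its own next output, and `chained_forecast` composes those steps into a long-horizon forecast; both names and the toy dynamics are assumptions for illustration, not part of the proposal.

```python
def one_step(state: float) -> float:
    """Toy stand-in for a model predicting its own next output."""
    return 0.9 * state + 1.0

def chained_forecast(state: float, horizon: int) -> list[float]:
    """Compose `horizon` myopic one-step predictions into a long-range
    forecast. Each intermediate value becomes checkable one step later,
    so every link in the chain can earn a verifiable reward."""
    forecast = []
    for _ in range(horizon):
        state = one_step(state)
        forecast.append(state)
    return forecast

print(chained_forecast(0.0, 3))  # three chained short-horizon steps
```

The long-range forecast never requires a single distant prediction: it emerges from repeated applications of the same short-horizon predictor, each of which is graded against near-term ground truth.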