A new arXiv paper frames scientific discovery as an optimization process that need not converge to the optimum. The authors argue that historical contingency and institutional lock-in trap researchers in local minima, much as gradient descent in machine learning can settle in a poor local minimum depending on where it starts. This path dependence constrains the search toward a global optimum. Practitioners should consider how current paradigms may obstruct more efficient theoretical breakthroughs.
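The gradient-descent analogy can be made concrete with a toy sketch (not from the paper; the loss function, learning rate, and the "research community" framing here are purely illustrative). Two runs of plain gradient descent on the same landscape end in different minima depending only on their starting point, mirroring how historical contingency can lock a field into a suboptimal basin:

```python
def f(x):
    # A toy "research landscape" with two basins: a shallow minimum
    # near x ≈ 1.13 and a deeper one near x ≈ -1.30.
    return x**4 - 3 * x**2 + x

def grad_f(x):
    # Analytic derivative of f.
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=2000):
    # Plain gradient descent: each step moves locally downhill from
    # wherever history (the initialization x0) happened to leave us.
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# Two "communities" starting from different historical positions converge
# to different minima; no step ever sees the whole landscape.
shallow = gradient_descent(2.0)   # converges near x ≈ 1.13
deep = gradient_descent(-2.0)     # converges near x ≈ -1.30
print(shallow, deep, f(shallow) > f(deep))
```

The run starting at `x0 = 2.0` ends in the shallow basin with a strictly worse value of `f`, even though both runs follow the identical local update rule, which is the paper's lock-in point in miniature.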