A new paper on arXiv argues that scientific discovery mirrors gradient descent in machine learning: historical contingency and institutional reward structures trap researchers in local optima rather than guiding them toward global truths. On this framework, path dependence limits the reach of current scientific paradigms, and practitioners should consider how cognitive lock-in biases research trajectories.
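The gradient-descent analogy can be made concrete with a toy sketch. The double-well loss function and starting points below are illustrative assumptions, not anything taken from the paper: plain gradient descent settles into whichever basin it starts in, just as a path-dependent research program can settle into a local optimum and never reach a deeper one.

```python
# Hypothetical illustration (not from the paper): gradient descent on a
# non-convex "loss landscape" converges to different optima depending on
# its starting point. The function f and all parameters are arbitrary choices.

def f(x):
    # Double-well landscape: local minimum near x = +1, global minimum near x = -1.
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    # Analytic derivative of f.
    return 4 * x * (x**2 - 1) + 0.3

def descend(x, lr=0.01, steps=500):
    # Plain gradient descent; nothing here can escape a basin once inside it.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# A trajectory starting in the right-hand basin settles for the shallow
# local optimum; one starting on the left finds the deeper (global) one.
x_local = descend(1.5)    # converges near x = +1
x_global = descend(-1.5)  # converges near x = -1
print(f(x_local) > f(x_global))  # True: the first run is stuck at a worse optimum
```

In the analogy, the starting point stands in for historical contingency and the fixed step rule for institutional incentives; escaping the shallow basin would require something outside ordinary descent, such as a restart or an exploratory jump.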