A 2019 research paper further dismantled the theoretical foundations of deep learning by challenging how model complexity is measured. Building on a 2016 study by Zhang et al., it argued that neural networks can memorize randomly labeled data even when standard complexity measures rate them as simple, so those measures cannot explain when a network will generalize. This gap between theory and practice leaves practitioners without a reliable mathematical framework for predicting generalization.
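The randomization test at the heart of Zhang et al.'s argument is straightforward to reproduce in miniature. The sketch below (assuming PyTorch; the architecture, data sizes, and hyperparameters are illustrative choices, not taken from the paper) trains a small MLP on labels drawn independently of the inputs. Training accuracy still climbs toward 100%, which is precisely the memorization behavior that uniform complexity measures fail to account for.

```python
# Illustrative randomization test: fit a small MLP to *random* labels.
# If the network can drive training accuracy to ~100% on noise, classical
# capacity-based bounds that would call it "simple" cannot explain its
# generalization on real data. All sizes/hyperparameters are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)

n, d, num_classes = 512, 64, 10
X = torch.randn(n, d)                      # random inputs
y = torch.randint(0, num_classes, (n,))    # labels with no relation to X

model = nn.Sequential(
    nn.Linear(d, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, num_classes),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(500):                   # full-batch gradient descent
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean()
print(f"training accuracy on random labels: {acc:.2%}")  # approaches 100%
```

Because the labels are independent of the inputs, any accuracy far above 10% (chance for 10 classes) on the training set is pure memorization, and near-perfect accuracy is exactly what over-parameterized networks achieve in practice.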