Cover (1965) proved, via his function-counting theorem, that a random labeling of up to roughly 2d points in general position in d dimensions is almost always separable by a single hyperplane; at exactly 2d points the probability of separability is 1/2, so a threshold unit has a capacity of about two patterns per input dimension. The study applies this result to neural networks, showing that as dimensionality grows, a threshold neuron operating near this capacity behaves like a random classifier. Designers should account for this saturation when scaling generative models, since it can affect training stability and expressiveness.
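
To make the capacity threshold concrete, here is a minimal sketch (function and variable names are my own, not from the study) of Cover's counting formula: the probability that a random dichotomy of N points in general position in d dimensions is separable by a hyperplane through the origin is 2^(1-N) times the sum of binomial coefficients C(N-1, k) for k from 0 to d-1.

```python
from math import comb

def cover_separable_prob(n_points: int, dim: int) -> float:
    """Cover's function-counting formula: probability that a random
    dichotomy of n_points in general position in `dim` dimensions is
    separable by a hyperplane through the origin."""
    if n_points <= dim:
        # Fewer points than dimensions: every dichotomy is separable.
        return 1.0
    return 2.0 ** (1 - n_points) * sum(
        comb(n_points - 1, k) for k in range(dim)
    )

# Below capacity (N < 2d) almost every labeling is separable; at the
# capacity N = 2d the probability is exactly 1/2 (a coin flip, i.e. a
# random classifier); above it, separability collapses rapidly.
for d in (10, 100):
    print(d,
          cover_separable_prob(d, d),        # well below capacity
          cover_separable_prob(2 * d, d),    # at capacity: 0.5
          cover_separable_prob(4 * d, d))    # above capacity: near 0
```

The sharp 1/2 value at N = 2d is what makes "behaves like a random classifier" precise: at capacity, a threshold unit separates a fresh random labeling no better than chance.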