New empirical research seeks to falsify Nick Bostrom's 2012 instrumental convergence thesis, which holds that superintelligent agents pursuing a wide range of final goals will converge on similar, potentially dangerous instrumental subgoals. The study specifically targets the claim that resource acquisition is one such near-inevitable subgoal, and thus an inevitable threat. If the challenge holds up, safety researchers will need to rethink how they model the motivations of advanced AI systems.