A social science researcher joined an exclusive AI alignment testing group in 2023 to probe pre-release models. The author argues that humility and curiosity, rather than technical credentials, drive the most effective safety work. This anecdotal account highlights how non-technical perspectives can surface model failures that specialists overlook, offering a human-centric view of the current alignment effort.