Input-output pairs from Gemini and Grok suggest language models grasp foundational ethics more consistently than many humans. The author presents these findings via a series of prompt-response tests. This anecdotal evidence highlights a gap between perceived and actual model alignment. Practitioners should evaluate if these capabilities emerge from training data or genuine reasoning.