A series of input-output pairs from Gemini 3 and Grok 4 suggests that language models identify ethical foundations more accurately than humans do. The author analyzes specific prompt responses to argue that these models possess a superior grasp of what matters morally. Though anecdotal, this evidence challenges current assumptions about alignment and the supposed inherent limitations of machine-led moral reasoning.