Input-output pairs from Gemini and Grok suggest that language models grasp ethical foundations more consistently than humans do. The author presents these examples to argue that models identify core values more precisely than human annotators. If that holds, the observation challenges current alignment assumptions, and practitioners should examine whether model-derived ethics could replace human-led labeling in safety training.