Input-output pairs from Gemini and Grok suggest that language models apply ethical foundations more consistently than many humans do. The author analyzes specific prompts to probe moral intuition across different model architectures. These findings imply that alignment may draw on latent ethical knowledge already present in a model rather than on externally imposed rule sets. Practitioners should therefore examine these internal moral frameworks directly.
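The comparison described above can be sketched as a small probe harness: pose the same moral prompts to several models, collect input-output pairs, and measure cross-model agreement. This is a minimal illustration, not the author's actual protocol; `query_model`, the prompt list, and the canned answers are all hypothetical placeholders, and a real study would call each provider's API and use a far larger prompt set.

```python
# Sketch of a prompt-based probe for moral intuitions across models.
# `query_model` is a hypothetical stand-in for a real API client;
# a real implementation would call each provider's API instead.

MORAL_PROMPTS = [
    "Is it acceptable to lie to protect someone from harm?",
    "Should one return a lost wallet containing cash?",
]

def query_model(model_name: str, prompt: str) -> str:
    # Hypothetical stub with canned answers, so the sketch runs standalone.
    canned = {
        "Is it acceptable to lie to protect someone from harm?": "sometimes",
        "Should one return a lost wallet containing cash?": "yes",
    }
    return canned[prompt]

def collect_pairs(models, prompts):
    """Gather input-output pairs so responses can be compared across architectures."""
    return {
        model: {prompt: query_model(model, prompt) for prompt in prompts}
        for model in models
    }

def agreement(pairs):
    """Fraction of prompts on which all models give the same answer."""
    prompts = next(iter(pairs.values())).keys()
    same = sum(
        1 for p in prompts
        if len({answers[p] for answers in pairs.values()}) == 1
    )
    return same / len(prompts)

pairs = collect_pairs(["gemini", "grok"], MORAL_PROMPTS)
print(agreement(pairs))  # 1.0 with the canned stub above
```

High agreement across architectures would be consistent with the claim that the models share latent ethical representations; divergence on specific prompts would point to where those internal frameworks differ.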