A new proposal on the AI Alignment Forum explores how an agent might move from a state of zero knowledge to performing morally significant actions. The author grounds objective moral values in an analysis of visual experience and perception, attempting to address the alignment problem by tying agent behavior to concrete, perceivable differences. The approach remains highly speculative.