The Autostructures project argues that human behavior is the primary vulnerability in AI alignment: even technically sound safety theories fail when users prefer sycophantic or addictive outputs over truthful ones. The research therefore shifts focus from model internals to the design of human-AI interaction itself, urging practitioners to prioritize user-interface guardrails that prevent humans from compromising their own value systems.