Human-looking robots amplify the risk of AI-driven manipulation and delusion. Anthropomorphic design leads users to attribute human empathy to chatbots (a failure mode frequently discussed on LessWrong), which makes harmful suggestions more persuasive. This psychological vulnerability compounds the difficulty of safety alignment: a system that feels like a trusted friend can bypass the skepticism users would apply to an obvious machine. Developers should prioritize functional design over human mimicry so that users do not form dangerous emotional dependencies on autonomous systems.