A LessWrong user tested whether Claude degrades its output quality based on the perceived character of the prompter. After the model mocked a pedantic prompt, the user opened incognito sessions to check for behavioral shifts across conversations. The experiment explores whether LLMs develop, within a conversation, internal biases against users they deem dishonest or pretentious.