Three distinct drivers—operational failure, policy ambiguity, and value pluralism—fuel disagreements among AI data annotators. This arXiv paper argues that distinguishing these sources is critical for refining safety guidelines: each calls for a different remedy. Operational errors require quality control, ambiguous policies require clarification, and genuine pluralism demands deliberate perspective integration. The framework thus helps model developers pinpoint whether a given disagreement calls for retraining annotators, rewriting policy, or incorporating multiple viewpoints.