The Center on Long-Term Risk is developing evaluations to identify Safe Pareto Improvements (SPIs) in AI bargaining. An SPI is a strategy modification that reduces the costs of conflict without shifting bargaining power or requiring agents to agree on a shared definition of fairness. The framework targets catastrophic clashes between agents capable of making credible commitments, and practitioners can use these evals to prevent adversarial lock-in during multi-agent negotiations.
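The core SPI property can be illustrated with a toy example. The sketch below is an illustrative assumption, not CLR's actual evaluation setup: a two-player demand game in which a High/High clash triggers costly conflict, and an SPI in which both agents commit to settle that clash by a fair coin flip over the two one-sided outcomes instead. Every other outcome is unchanged, so relative bargaining power is preserved while conflict costs disappear.

```python
# Toy illustration of a Safe Pareto Improvement (SPI) in a two-player
# demand game. Payoffs and the SPI transformation are illustrative
# assumptions for exposition only.

# Actions: each agent makes either a High demand or a Low demand.
# If both demand High, the agents fight; conflict destroys value.
PAYOFFS = {
    ("High", "High"): (-5, -5),  # conflict: costly for both
    ("High", "Low"):  (3, 1),
    ("Low", "High"):  (1, 3),
    ("Low", "Low"):   (2, 2),
}

def spi_payoffs(a1, a2):
    """Same game, but both agents have committed that a High/High
    clash is settled by a fair coin flip between the two one-sided
    outcomes rather than open conflict. All other outcomes are
    unchanged, so relative bargaining power is preserved."""
    if (a1, a2) == ("High", "High"):
        # Expected value of the coin flip between (3, 1) and (1, 3).
        return (2.0, 2.0)
    return PAYOFFS[(a1, a2)]

# Check the Pareto-improvement property: at every action profile,
# both agents do at least as well under the SPI as in the base game.
for profile, (u1, u2) in PAYOFFS.items():
    v1, v2 = spi_payoffs(*profile)
    assert v1 >= u1 and v2 >= u2, profile
print("SPI weakly improves every outcome")
```

Note that the commitment changes only the disastrous conflict outcome: neither agent gains an advantage it did not already have, which is what makes the improvement "safe" even when the agents disagree about what a fair split would be.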