The Center on Long-Term Risk is developing evaluations to identify Safe Pareto Improvements (SPIs): strategies that reduce the costs of conflict between AI agents without altering their bargaining power or requiring a shared definition of fairness. The framework targets catastrophic conflict risks, and practitioners can use these evaluations to keep agents from locking in incompatible commitments. A toy illustration of the core property follows below.
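As a rough sketch of what "reducing conflict costs without altering bargaining power" means, consider a Chicken-style demand game. The payoff numbers, variable names, and the simple cell-wise check below are illustrative assumptions, not part of CLR's evaluations; a real SPI analysis also has to reason about how agents' instructions interact, which this sketch omits.

```python
# Illustrative sketch only: a toy Chicken-style "demand game" showing the idea
# behind a Safe Pareto Improvement (SPI). All payoff numbers and names are
# hypothetical assumptions, not taken from CLR's evaluations.

import numpy as np

# Rows: agent A's action, columns: agent B's action (0 = yield, 1 = demand).
payoff_A = np.array([[ 0.0,  -2.0],
                     [ 3.0, -10.0]])   # (demand, demand) is costly conflict
payoff_B = np.array([[ 0.0,   3.0],
                     [-2.0, -10.0]])

# SPI: both agents jointly commit that, whenever their strategies would land
# on the conflict outcome, they instead execute a cheaper fallback (e.g.
# binding arbitration). Only the conflict cell changes; every other cell, and
# hence each agent's relative bargaining position, is left untouched.
spi_A, spi_B = payoff_A.copy(), payoff_B.copy()
spi_A[1, 1], spi_B[1, 1] = -1.0, -1.0

def is_safe_pareto_improvement(old_A, old_B, new_A, new_B):
    """True if every outcome is at least as good for both agents as before."""
    return bool(np.all(new_A >= old_A) and np.all(new_B >= old_B))

print(is_safe_pareto_improvement(payoff_A, payoff_B, spi_A, spi_B))  # True
```

The point of the check is that the transformed game weakly improves every outcome for both agents, so neither has reason to refuse it, while the non-conflict cells that determine relative bargaining positions are unchanged.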