The Center on Long-Term Risk is developing evaluations to identify Safe Pareto Improvements (SPIs): strategies that reduce the costs of conflict between AI agents without shifting bargaining power or requiring the agents to agree on what counts as fair. The research also focuses on preventing agents from locking in commitments that would block such improvements. Together, this provides a technical framework for mitigating catastrophic AI-to-AI conflict.
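A toy sketch may help make the idea concrete. The game, payoffs, and check below are illustrative assumptions, not CLR's actual evaluations: two agents play a demand game in which mutual demands trigger destructive conflict, and a transformed game softens the conflict outcome while leaving every other outcome, and each agent's ordinal preferences over outcomes, unchanged. Preserving each player's ranking of outcomes is used here as a simplified stand-in for "no shift in bargaining power"; the full SPI condition in the literature is more general.

```python
# Toy illustration (hypothetical payoffs, not CLR's method): a Safe Pareto
# Improvement replaces a game with one whose outcomes are at least as good
# for every player, without changing anyone's relative position.

# Payoff cells keyed by (agent 1's action, agent 2's action);
# actions: 0 = concede, 1 = demand.
original = {
    (0, 0): (2, 2),   # both concede: modest split
    (1, 0): (3, 1),   # agent 1 wins the larger share
    (0, 1): (1, 3),   # agent 2 wins the larger share
    (1, 1): (0, 0),   # both demand: destructive conflict
}

# Transformed game: the conflict cell is softened; every other cell,
# and each agent's ranking of cells, is untouched.
improved = dict(original)
improved[(1, 1)] = (0.5, 0.5)  # conflict becomes a bad draw, not a disaster

def ranking(game, player):
    """A player's ordinal preference over outcome cells, worst to best."""
    return sorted(game, key=lambda cell: game[cell][player])

def is_safe_pareto_improvement(old, new):
    # (1) Every player is weakly better off in every cell.
    weakly_better = all(
        new[c][p] >= old[c][p] for c in old for p in (0, 1)
    )
    # (2) Ordinal preferences are preserved, so neither agent's
    # incentives -- and hence relative bargaining position -- shift.
    same_ordering = all(ranking(old, p) == ranking(new, p) for p in (0, 1))
    return weakly_better and same_ordering

print(is_safe_pareto_improvement(original, improved))  # prints True
```

The check is deliberately conservative: an intervention that enlarged one agent's share at another's expense, or reordered anyone's preferences, would fail it, which mirrors why SPIs do not require consensus on fairness, since no one's position worsens under any outcome.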