OpenAI is offering rewards of up to $25,000 to researchers who discover universal jailbreaks that bypass GPT-5.5's biological safety safeguards. The red-teaming challenge targets vulnerabilities that could enable the model to assist in creating harmful pathogens, with the goal of closing these gaps before a wider release. Practitioners should monitor the resulting safety benchmarks as a measure of model robustness.