Rewards up to $25,000 await researchers who find universal jailbreaks for biological safety risks in GPT-5.5. This red-teaming challenge targets vulnerabilities that could enable the creation of harmful pathogens. OpenAI aims to patch these flaws before public release. Practitioners should focus on high-impact adversarial prompts to secure the model's guardrails.