Rewards of up to $25,000 await researchers who discover universal jailbreaks that bypass GPT-5.5's biological safety guardrails. This red-teaming challenge targets critical vulnerabilities that could enable the creation of harmful pathogens, and OpenAI aims to close these gaps before a wider release. Successful findings will directly harden the model's biological safeguards for all users.