OpenAI is offering rewards of up to $25,000 to researchers who find universal jailbreaks that bypass GPT-5.5's biological-risk safeguards. The red-teaming challenge targets critical gaps in the model's safety guardrails, which OpenAI aims to close before widespread deployment. Practitioners should monitor the results for newly disclosed jailbreak techniques and safety benchmarks.