The 8088
OpenAI AI Safety Apr 23

GPT-5.5 Bio Bug Bounty

★★★☆☆ significance 3/5

OpenAI has announced a bug bounty program focused on identifying universal jailbreaks related to biological safety risks in GPT-5.5. Participants can earn rewards of up to $25,000 for discovering these vulnerabilities.

Why it matters: Proactive testing for biological safety risks signals a shift toward securing frontier models against high-stakes physical-world threats.
Read the original at OpenAI

Entities mentioned

OpenAI

Tags

#red-teaming #biosafety #bug-bounty #jailbreaking
