OpenAI Bug Bounty Program Increases Top Reward to $100,000


OpenAI is prioritizing security with a major bug bounty program increase and new AI security research grants. Find out how the company is collaborating with researchers and experts to protect its AI platforms from emerging threats.

OpenAI is enhancing its security infrastructure, focusing on a forward-looking approach towards AI, by expanding security initiatives across grant programs, bug bounties, and internal defences.

In its latest blog post, OpenAI has unveiled a suite of new cybersecurity initiatives, signalling a bold push towards artificial general intelligence (AGI). A key element of this strategic move is a substantial increase in the maximum reward offered through its bug bounty program, now reaching $100,000 for critical findings.

As previously reported by HackRead.com, OpenAI launched its bug bounty program in April 2023 in partnership with Bugcrowd. The program initially focused on finding flaws in the ChatGPT AI chatbot to improve its security and reliability, with rewards starting at $200 for low-severity findings and reaching $20,000 for exceptional discoveries.

Now, OpenAI has confirmed that the program is undergoing a significant overhaul: the maximum pay-out has been increased from $20,000 to $100,000, and the scope of the program has been broadened substantially, a move OpenAI says reflects its commitment to earning users’ trust in its systems.

“This increase reflects our commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems,” the company noted in the announcement.

To further incentivize participation, OpenAI is introducing limited-time bonus promotions, with the first focusing on IDOR access control vulnerabilities. This promotion, running from March 26th to April 30th, 2025, also increases the baseline bounty range for these types of vulnerabilities.
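For readers unfamiliar with the vulnerability class targeted by the promotion, IDOR (Insecure Direct Object Reference) flaws arise when an endpoint fetches an object by a user-supplied ID without checking that the requester is authorized to access it. The sketch below is a generic illustration of the pattern and its fix; the function and record names are hypothetical and have no connection to OpenAI’s actual systems.

```python
# Illustrative in-memory "database" of objects owned by different users.
ORDERS = {
    101: {"owner": "alice", "item": "keyboard"},
    102: {"owner": "bob", "item": "monitor"},
}

def get_order_vulnerable(requesting_user, order_id):
    # IDOR: returns any record matching the ID, never checking
    # whether the requester actually owns it.
    return ORDERS.get(order_id)

def get_order_fixed(requesting_user, order_id):
    # Fix: enforce an ownership check before returning the object.
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != requesting_user:
        # Treat "not yours" the same as "not found" to avoid
        # leaking which IDs exist.
        return None
    return order
```

In the vulnerable version, "alice" can read order 102 simply by guessing its ID; the fixed version denies the request. Real-world fixes apply the same ownership check server-side on every object access, not just on the listing endpoint.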

The company also plans to expand its Cybersecurity Grant Program, which has already funded 28 research projects focused on both offensive and defensive security strategies. These projects have explored areas such as autonomous cybersecurity defenses, secure code generation, and prompt injection. The grant program is now seeking proposals for five new research areas: software patching, model privacy, detection and response, security integration, and agentic AI security.

OpenAI is also introducing microgrants in the form of API credits to facilitate rapid prototyping of innovative cybersecurity ideas. Furthermore, it plans to engage in open-source security research, collaborating with experts from academic, government, and commercial labs to identify vulnerabilities in open-source software code.

This shift is aimed at improving the ability of OpenAI’s AI models to find and patch security flaws. The company plans to release security disclosures to relevant open-source parties as vulnerabilities are discovered.

In addition, OpenAI is integrating its own AI models into its security infrastructure to enhance real-time threat detection and response. To strengthen its defences, the company has established a new red team partnership with SpecterOps, a cybersecurity firm. This collaboration will involve rigorous simulated attacks across OpenAI’s infrastructure, including corporate, cloud, and production environments.

As OpenAI’s user base expands, now serving over 400 million weekly active users, the company acknowledges its growing responsibility to safeguard user data and systems. While it focuses on developing advanced AI agents, the company is also addressing the unique security challenges associated with these technologies. This includes defending against prompt injection attacks, implementing advanced access controls, comprehensive security monitoring, and cryptographic protections, reinforcing their dedication to building secure and trustworthy AI.
