OpenAI Boosts Cybersecurity Grants and Bug Bounty Program Amid Data Leak Concerns

March 27, 2025
OpenAI Boosts Cybersecurity Grants and Bug Bounty Program Amid Data Leak Concerns
  • As part of this initiative, a limited-time promotion began on March 26, 2025, allowing researchers to earn up to $13,000 for reporting Insecure Direct Object Reference (IDOR) vulnerabilities.

  • The bug bounty program, initially launched in April 2023 in partnership with Bugcrowd, will now offer significant rewards, including six-figure sums for exceptional findings.

  • OpenAI is collaborating with experts across various sectors to identify skill gaps and improve its models' ability to detect and patch vulnerabilities.

  • Continuous testing is being conducted to provide insights for securing AI systems against various threats, including prompt injection and other malicious manipulations.

  • With over 400 million weekly active users, OpenAI is committed to a proactive and transparent approach to security, addressing challenges like prompt injection attacks.

  • OpenAI is expanding its Cybersecurity Grant program alongside an increase in its bug bounty initiative, which aims to support research projects focused on enhancing AI security.

  • This expansion follows a data leak incident that affected about 1.2% of ChatGPT Plus subscribers, prompting the organization to bolster its security measures.

  • However, the program specifically excludes model safety issues and exploits that allow users to bypass safeguards in ChatGPT.

  • The organization actively monitors for malicious activities targeting its systems and works with other AI labs to enhance collective cybersecurity efforts.

  • Funded projects under the program have already tackled critical issues such as secure code generation and the development of autonomous cybersecurity defenses.

  • This new bounty initiative is part of a broader security strategy that encompasses funding for security research, ongoing adversarial testing, and collaboration with open-source communities.

  • OpenAI's next-generation AI projects are adopting industry-leading security practices, including zero-trust architectures and hardware-backed solutions.

Summary based on 9 sources


Get a daily email with more Tech stories

Sources





More Stories