OpenAI Kept Mum About Hack of Sensitive AI Research


A security breach at AI research firm OpenAI potentially exposed internal secrets after a hacker accessed employee discussions of sensitive AI projects, an incident disclosed internally in April 2023. Learn more about the OpenAI hack and its far-reaching implications.

A security breach at San Francisco-based OpenAI, a leading artificial intelligence (AI) research company and the parent of the ChatGPT chatbot, has sparked concerns about US national security and the potential misuse of advanced AI technology.

According to a report by the New York Times (NYT), early last year a hacker gained unauthorized access to an internal messaging system used by OpenAI employees, which reportedly contained discussions about sensitive AI research and development projects.

The hacker stole details about OpenAI technologies by targeting an online forum where employees discussed the company's latest work. However, no customer or partner data was taken, and the hacker could not access the systems where "the company houses and builds its artificial intelligence," the NYT revealed.

OpenAI executives disclosed the incident to employees and the board of directors at an all-hands meeting in April 2023 but did not make it public, a decision some employees criticized. OpenAI technical program manager Leopold Aschenbrenner later raised doubts about the company's security measures.

In a memo sent to the board, Aschenbrenner argued that the company’s efforts were inadequate to prevent foreign adversaries, including the Chinese government, from stealing its secrets.

OpenAI later dismissed Aschenbrenner for leaking information, a move he believes was politically motivated. The company maintains the firing was unrelated to his memo and says it disagreed with his assessment of its digital security infrastructure.

Company spokesperson Liz Bourgeois emphasized that while OpenAI shares Aschenbrenner's commitment to building safe AI, it disagrees with his claims about the company's security, including his characterization of an incident that was addressed with the board before he joined.

OpenAI says it did not consider the incident a national security threat and did not inform federal law enforcement agencies because it believed the hacker was a private individual with no suspected ties to a foreign government.

The incident raises concerns about foreign threats to US national security, including the possibility of China stealing AI technology; it also calls OpenAI's security measures into question and exposes internal disagreements over AI risks.

It serves as a wake-up call for the AI industry, urging collaboration among companies, governments, and the public to develop effective security measures and ethical guidelines for AI development. To that end, companies must strengthen their cybersecurity defenses and foster a culture of security awareness.
