OpenAI blocks state-sponsored hackers from using ChatGPT


OpenAI has removed accounts used by state-sponsored threat groups from Iran, North Korea, China, and Russia that were abusing its artificial intelligence chatbot, ChatGPT.

The AI research organization took action against specific accounts associated with the hacking groups that were misusing its large language model (LLM) services for malicious purposes after receiving key information from Microsoft's Threat Intelligence team.

In a separate report, Microsoft provides more details on how and why these advanced threat actors used ChatGPT.

Activity associated with the following threat groups was terminated on the platform:

- Forest Blizzard (Strontium) [Russia]: Utilized ChatGPT to conduct research into satellite and radar technologies pertinent to military operations and to optimize its cyber operations with scripting enhancements.
- Emerald Sleet (Thallium) [North Korea]: Leveraged ChatGPT to research North Korea, generate spear-phishing content, understand vulnerabilities (such as CVE-2022-30190 "Follina"), and troubleshoot web technologies.
- Crimson Sandstorm (Curium) [Iran]: Engaged ChatGPT for social engineering assistance, error troubleshooting, .NET development, and developing evasion techniques.
- Charcoal Typhoon (Chromium) [China]: Interacted with ChatGPT to assist in tooling development, scripting, comprehending cybersecurity tools, and generating social engineering content.
- Salmon Typhoon (Sodium) [China]: Employed LLMs for exploratory inquiries on a wide range of topics, including sensitive information, high-profile individuals, and cybersecurity, to expand their intelligence-gathering tools and evaluate the potential of new technologies for information sourcing.

Generally, the threat actors used the large language models to enhance their strategic and operational capabilities, including reconnaissance, social engineering, evasion tactics, and generic information gathering.

None of the observed cases involve the use of LLMs for directly developing malware or complete custom exploitation tools.

Instead, the coding assistance observed was limited to lower-level tasks such as requesting evasion tips, writing scripts, disabling antivirus software, and generally optimizing technical operations.

In January, a report from the United Kingdom's National Cyber Security Centre (NCSC) predicted that by 2025 the operations of sophisticated advanced persistent threats (APTs) will benefit from AI tools across the board, especially in developing evasive custom malware.

Over the past year, though, OpenAI's and Microsoft's findings show that AI mainly boosted APT activity in areas like phishing and social engineering, while the rest of the observed usage was largely exploratory.

OpenAI says it will continue to monitor and disrupt state-backed hackers using specialized monitoring tech, information from industry partners, and dedicated teams tasked with identifying suspicious usage patterns.

"We take lessons learned from these actors' abuse and use them to inform our iterative approach to safety," reads OpenAI's post.

"Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future, and allows us to continuously evolve our safeguards," the company added.
