Cybersecurity researchers at Kaspersky have uncovered more than 3,000 posts on the dark web in which threat actors seek to abuse or maliciously exploit ChatGPT, OpenAI's AI-powered chatbot.
The internet's underbelly, the dark web, has evolved from a hub for stolen data and illicit transactions into a breeding ground for AI-powered cybercrime tools, reports Kaspersky's Digital Footprint Intelligence service.
The findings are based on Kaspersky's observation of dark web forum discussions throughout 2023. Artificial intelligence (AI) technologies are being exploited for illegal activities, with threat actors sharing jailbreaks through dark web channels and repurposing legitimate tools for malicious ends.
In September 2023, Hackread reported that SlashNext researchers had found threat actors promoting working jailbreaks for AI chatbots and using them to write phishing emails. These actors also sell jailbroken versions of popular AI tools.
Particularly notable is cybercriminals' preference for ChatGPT and large language models (LLMs), which simplify tasks and make information more accessible, but also pose new security risks because cybercriminals can turn them to malicious purposes.
Threat actors are exploring schemes to apply ChatGPT and AI more broadly, from malware development to other illicit uses of language models, explained Kaspersky digital footprint analyst Alisa Kulishenko.
Cybersecurity threats like FraudGPT and other malicious chatbots were recurring topics in dark web forum discussions, with nearly 3,000 posts observed throughout the year, peaking in March 2023. Stolen ChatGPT accounts are also a popular commodity, with more than 3,000 ads offering them for sale.
Between January and December 2023, according to Kaspersky's report, cybercrime forums regularly featured discussions on using ChatGPT for illegal activities. One post described using GPT to generate polymorphic malware, which mutates its code with each iteration and is therefore harder to detect and analyze than conventional malware.
Another post suggested using the OpenAI API to generate code with specific functionality while bypassing the service's security checks. While no malware created this way has been detected in the wild, the technique is feasible.
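For context, the kind of API call involved is trivial to make. The minimal sketch below uses OpenAI's official Python client to ask a chat model to generate code from a plain-language specification; the model name and the deliberately benign prompt are assumptions for illustration, not the forum post's actual request.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model to generate code matching a plain-language specification.
# The prompt here is deliberately benign; the forum post described phrasing
# such requests so the generated code slips past the service's checks.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that validates email addresses.",
        }
    ],
)
print(response.choices[0].message.content)
```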
ChatGPT-generated answers can solve tasks that previously required expertise, lowering the entry threshold into many fields, including criminal ones: a single well-crafted prompt can, for example, process a specific string format that would otherwise call for custom parsing code, as sketched below. AI is also being incorporated into malware for self-optimization, with the malware learning from user behaviour to adapt its attacks.
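As a hedged illustration of that "single prompt" pattern, the sketch below asks a chat model to break a delimited record into named fields; the 'email:username:country' layout and the sample line are invented for this example.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A reusable single prompt that turns one record of a fixed string format
# into named fields, replacing what would otherwise be custom parsing code.
RECORD_PROMPT = (
    "Each record has the format 'email:username:country'. "
    "Return the fields of this record as a JSON object:\n{line}"
)

line = "jane.doe@example.com:jdoe:DE"  # invented sample record
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[{"role": "user", "content": RECORD_PROMPT.format(line=line)}],
)
print(response.choices[0].message.content)
```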
AI-powered bots are crafting personalized phishing emails that manipulate users into revealing sensitive information. Cybercriminals are also using deepfakes, hyper-realistic AI-generated audio and video, to impersonate celebrities and other public figures or to manipulate financial transactions, easily fooling unsuspecting users.
Moreover, the researchers noted that cybercriminal forums integrate ChatGPT-like tools for routine tasks, with threat actors using jailbreaks to unlock additional functionality. In 2023, 249 offers to sell such jailbreak prompt sets were discovered.
Open-source tools for obfuscating PowerShell code are shared for research purposes, but their easy accessibility can attract cybercriminals. Projects like WormGPT and FraudGPT raise similar concerns, and their developers warn against scams and phishing pages that claim to offer access to these tools.
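To illustrate why encoded PowerShell matters to defenders, the Python sketch below packages a benign command the way PowerShell's -EncodedCommand switch expects (Base64 over UTF-16LE, the same packaging many obfuscation tools rely on) and applies a simple, invented detection heuristic. The regexes and length threshold are illustrative assumptions, not a production detector.

```python
import base64
import re

# Illustrative heuristic only: flag PowerShell command lines that show
# common obfuscation cues (an -EncodedCommand switch or a long Base64 run).
ENCODED_FLAG = re.compile(r"-e(nc(odedcommand)?)?\b", re.IGNORECASE)
BASE64_RUN = re.compile(r"[A-Za-z0-9+/=]{100,}")

def looks_obfuscated(command_line: str) -> bool:
    """Return True if a process command line matches simple obfuscation cues."""
    if "powershell" not in command_line.lower():
        return False
    return bool(ENCODED_FLAG.search(command_line) or BASE64_RUN.search(command_line))

# A benign command, encoded the way -EncodedCommand expects (Base64 over UTF-16LE).
payload = base64.b64encode("Write-Output 'hello'".encode("utf-16-le")).decode()
print(looks_obfuscated(f"powershell.exe -EncodedCommand {payload}"))  # True
print(looks_obfuscated("powershell.exe -Command Get-Date"))           # False
```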
This poses a significant challenge for cybersecurity experts and law enforcement agencies, as traditional detection methods become increasingly ineffective against AI-powered threats. New strategies and tools must be developed to counter them.
Investing in AI-powered security solutions, exercising caution with suspicious emails, links, and attachments, and staying informed about the latest trends in cybercrime and AI are all essential.