Statista's Market Insights estimates that the cost of cyber crime worldwide will escalate from $9.22 trillion in 2024 to $13.82 trillion by 2028.
Phishing remains one of the most popular types of cyber crime. According to research, 91 per cent of cyber threats begin with a phishing email, and people continue to fall victim because these attacks are among the most successful.
An even more dangerous phishing threat is now emerging, however, with deep fake fraud surging by 3,000 per cent in 2023, according to a report.
Fake content is generated using deep learning algorithms, and as AI evolves, the deep fake results become ever more believable.
Fake video or audio is also harder to distinguish from the genuine article than a phishing email, where dubious links or spelling mistakes may be more discernible.
I spoke about evolving GenAI and cyber crime with Dane Sherrets, an experienced Senior Solutions Architect at HackerOne, who helps large enterprises and governments to successfully leverage bug bounty programs and ethical hacking services to best minimise growing cyber risks.
Dane also demonstrated just how easy AI voice cloning is, taking only a matter of minutes.
Image courtesy of Dane Sherrets
1) How is GenAI making it easier for hackers and threat actors to perpetrate cyber crimes?
GenAI revolutionises various software programs with its wide-ranging integration and universal accessibility. Significantly, this universal accessibility is lowering the entry barrier for cybercriminals, enabling them to exploit cyber vulnerabilities without extensive technical expertise, particularly in coding. Cybercriminals are also finding creative new ways to exploit AI developments to achieve their goals.
The hasty integration of AI-powered features and the lack of comprehensive understanding of their potential security vulnerabilities opens up attack surfaces for cybercriminals, negating the headway businesses have accomplished in defense mechanisms.
Furthermore, GenAI lends an edge to social engineering attacks like never before. Armed with GenAI capabilities, phishing content can now mimic human communication styles with alarming accuracy, making the task of discerning phishing scams increasingly difficult. In addition, leveraging advanced tools, hackers can now fashion deep fakes and other scams that are remarkably hard to differentiate from genuine interactions.
Yet, it’s worth noting that GenAI is not solely a tool for malicious actors. According to findings from our latest Hacker-Powered Security Report 2023, 53% of ethical hackers currently utilise AI in various capacities, with 61% planning to adopt and refine GenAI-based hacking tools to uncover more security vulnerabilities.
Specifically, they are employing Large Language Models (LLMs) for generating fuzzing lists, crafting test harnesses for APIs, and helping reverse engineer firmware and other types of code, such as those found in thick client and mobile applications. This strategic application of GenAI, mirroring the thought process of a cybercriminal, enables ethical hackers to identify potential security loopholes and bolster organisational defences well before an actual cyber attack.
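To give a sense of what such a fuzzing list is, here is a minimal hand-rolled Python stand-in for the kind of candidate-input list an LLM can generate; the seed values and mutation rules are illustrative assumptions, not HackerOne tooling:

```python
# Illustrative only: a tiny generator for a fuzzing wordlist, standing in
# for the kind of candidate-input list ethical hackers ask LLMs to produce.

def build_fuzz_list(seeds):
    """Expand seed inputs with common boundary-case mutations."""
    mutations = []
    for seed in seeds:
        mutations.append(seed)           # original value
        mutations.append(seed.upper())   # case variation
        mutations.append(seed + "'")     # stray quote (injection-style probe)
        mutations.append(seed * 50)      # oversized input
        mutations.append(seed + "\x00")  # embedded null byte
    # Deduplicate while preserving order
    return list(dict.fromkeys(mutations))

if __name__ == "__main__":
    for candidate in build_fuzz_list(["admin", "1"]):
        print(repr(candidate))
```

Each line of the resulting list is thrown at an input field or API parameter to see whether the target mishandles it; an LLM simply produces richer, more context-aware lists than fixed mutation rules like these.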
2) Phishing is a leading threat to cybersecurity. What can the average person do to mitigate this and other forms of cyber attacks?
Mitigating phishing attacks begins with education and awareness. Individuals are advised to recognise the common characteristics of phishing attempts, which often include unexpected email requests, grammatical errors, and suspicious sender addresses. They should also recognise specific features of a phishing email, such as threatening or urgent language designed to provoke immediate action. In addition, they should avoid opening unexpected attachments and links and be wary of urgent requests for personal details or money.
Individuals should also be mindful of secure connections. Websites that request sensitive information should have HTTPS in the URL and a valid security certificate to ensure their legitimacy. If anyone receives an email or message soliciting sensitive information, it's advised to verify the request's authenticity by contacting the sender through other verified channels. Phishing attempts can often be reported directly through the email service provider, which helps mitigate these threats.
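A few of these link checks can be automated before clicking. The Python sketch below applies some rough heuristics of my own; it is an illustrative, incomplete example, not a real phishing detector:

```python
from urllib.parse import urlparse

def suspicious_url(url):
    """Return a list of red flags found in a URL (illustrative heuristics only)."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        flags.append("no HTTPS")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain name")
    if "xn--" in host:
        flags.append("punycode host (possible look-alike characters)")
    if "@" in parsed.netloc:
        flags.append("'@' in address (real host hidden after it)")
    return flags
```

An empty result does not prove a link is safe (attackers can obtain valid HTTPS certificates too); the point is that any flag raised is reason to stop and verify through another channel.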
Data backups should be done regularly to secure important data against potential cyber attacks, such as ransomware. Systems and software should be updated with the latest patches to fix any security vulnerabilities.
Implementing antivirus and anti-malware software and running regular scans helps detect and prevent potential malware intrusions. Using strong, unique passwords and enabling multi-factor authentication (MFA), if available, can further reinforce one's security.
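As a small illustration of the "strong, unique passwords" advice, here is a hedged Python sketch using the standard library's cryptographically secure `secrets` module; the length and character-class policy are assumptions for the example:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a strong random password using the OS's secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        # Require at least one lowercase letter, one uppercase letter, and one digit
        if (any(c.islower() for c in pwd)
                and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)):
            return pwd
```

In practice a password manager does this for you and remembers the result, which is what makes a unique password per site feasible.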
Ensuring network segmentation by keeping personal and work devices separate is also advisable, as it can limit the damage from a potential cyber attack.
3) With evolving new technology, the latest menacing version is deep fake phishing. Can you explain what this is and how it is being used by cybercriminals?
Deep fake phishing represents a sophisticated cyber threat that manipulates emerging AI technologies to forge convincing fake audio or video content. This technique allows cybercriminals to accurately simulate an individual’s voice, leading to various deceptive activities. Initially utilised in creating audiobooks or aiding those who’ve lost their voice, the technology has found nefarious applications, including scamming and social engineering attacks targeting individuals and organisations.
One highlighted case involved a CEO being tricked into transferring $243,000 due to a fraudulent call where the scammer utilised voice cloning.
Similarly, these AI-driven capabilities are used to conduct fake kidnapping calls or fraudulent requests for sensitive information by impersonating friends or family, illustrating a significant rise in advanced phishing attacks.
4) Your demonstration showed how quick and easy AI voice cloning can be. Can you recap the simple steps taken?
First, you need a brief audio clip (around five minutes) of the targeted individual speaking. This clip can be acquired through numerous means, such as phone calls, social media, or public recordings. The sample is then uploaded into a specialised AI tool designed for voice cloning, which is "trained" on the person’s voice to simulate it accurately.
Through this technology, a user can input text or even speak directly into the tool, prompting it to generate audio in the cloned voice, complete with inflexions or pauses to enhance realism.
As noted, the process takes less than five minutes with some of the available free and open-source tools, signifying a surprisingly low barrier to conducting such sophisticated scams.
5) What are your top tips to recognise a deep fake and ways to prevent falling victim to it?
· Listen for anomalies: Pay attention to odd pauses, unnatural phrasing, or any background noise that seems out of place. Voices cloned with today’s technology might not capture the full naturalness of human speech, including breathing patterns or minute inflexions.
However, it should be noted that this technology is rapidly evolving, and we should not over-rely on these cues to determine authenticity.
· Verify through knowledge: Do not hesitate to ask questions that only the genuine person could answer. This can quickly unmask a deep fake attempt, as the technology may not be able to supply personal knowledge or memories.
· Use a safe word: Establish a secret word or phrase with close contacts to verify your identity in urgent situations. This is a simple but effective measure to counteract deceptive calls.
· Be cautious of urgency: Many scams rely on creating a sense of urgency to prompt rash decisions. Always take a moment to pause and think critically about the request, especially if it involves money or sensitive information.
· Mind your digital footprint: Be aware of the audio content you upload online. Every public clip expands your "audio attack surface," potentially giving scammers material to clone your voice. Consider the necessity and visibility of your online audio interactions.
By adhering to these guidelines, individuals can significantly enhance their resilience against the increasingly sophisticated landscape of deep fake and voice cloning scams.
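The safe-word tip above can be extended into a simple challenge-response check for remote verification. The Python sketch below is an illustrative assumption of how such a scheme might work, not an established protocol or product; the secret value shown is a placeholder agreed in person beforehand:

```python
import hashlib
import hmac
import secrets

# Example placeholder: the shared secret would be agreed in person, never online.
SHARED_SECRET = b"correct horse battery staple"

def make_challenge():
    """Caller issues a fresh random challenge so old responses cannot be replayed."""
    return secrets.token_hex(8)

def respond(challenge, secret=SHARED_SECRET):
    """The genuine contact answers with an HMAC over the challenge."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge, response, secret=SHARED_SECRET):
    """Constant-time comparison avoids leaking timing information."""
    return hmac.compare_digest(respond(challenge, secret), response)
```

For most families, simply speaking the agreed safe word achieves the same goal; the code just makes the underlying idea explicit: a shared secret plus a fresh challenge defeats both impersonation and replayed recordings.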