
Social Engineering Attacks Get AI Boost, Show No Signs of Slowing Down

ISACA’s 2022 State of Cybersecurity Report shows that social engineering remained the leading type of cyber attack analyzed in 2022. This is not surprising given how useful the technique is to cybercriminals: several sources estimate that around nine in ten attacks involve some form of social engineering. These attacks focus on people, who remain the weakest link in the cybersecurity chain.

With the advent of more advanced artificial intelligence, generative AI in particular, the social engineering problem has become even more dangerous. AI serves as a tool for launching more aggressive attacks and devising more sophisticated tactics, boosting the effectiveness of an already highly effective form of attack.

Just recently, Microsoft admitted that its Teams platform was targeted in a social engineering attack that reportedly affected around 40 organizations worldwide. Of note, this attack did not involve any AI. It was a conventional human-targeted attack designed to exploit negligence and sloppiness, yet it penetrated the cyber defenses of one of the biggest IT organizations in the world. Imagine the threat AI-bolstered social engineering can pose.

Growing AI threats

It may sound ironic, but AI is now playing a big role in attacking the human mind. What used to be an attack requiring careful human analysis and a feel for human nature is now amplified by artificial intelligence.

Social engineering attacks used to be a game of reciprocity, commitment, social proof, authority, bandwagoning, and similar levers, with attackers exploiting other people’s habits, predispositions, tendencies, and responses to certain cues. These attacks used to take time and a lot of patience on the part of the perpetrators, who usually had to wait and see how their targets would respond.

With more advanced AI now widely available, social engineering becomes radically faster, more expansive, and more anticipatory. Threat actors can automate both the search for vulnerabilities and the attack itself. Advanced natural language processing, a form of AI that understands and analyzes natural human language (written and spoken), can be used to examine interactions among people and spot openings for an attack.

Additionally, AI makes it easier to scale up social engineering attacks. Generative AI in particular accelerates the generation of personalized lures, allowing attackers to conduct far more attacks and reach more targets. It preserves the “personalized” approach that makes social engineering work while taking attacks to a mass scale. One study attributes a 135 percent increase in social engineering attacks to generative AI.

AI-powered harpoon whaling

One social engineering attack that notably uses AI is harpoon whaling: the deception of the “big guns” (such as business executives and directors), who usually have access to more sensitive information and enterprise resources. This is a highly targeted attack that requires extensive research on the targets and perfect timing.

Artificial intelligence makes it easier to collect information about potential targets and analyze the resulting data to formulate an efficient harpoon whaling plan. AI can also generate attack text or scripts personalized for specific targets. Every step is accelerated so the attack can be launched as soon as the perfect opportunity is spotted.

Harpoon whaling differs from phishing in that it focuses on crafting a highly believable, convincing email or other piece of content for one specific person who holds high-level authority in an organization. Phishing, in contrast, usually involves sending one or a few generic messages to myriad targets. Harpoon whaling takes full advantage of the speed and personalization AI affords.

The success rate of harpoon whaling is generally higher than that of phishing. With AI, threat actors can find more success because of the scaled-up volume of attacks and faster gathering and analysis of data.

AI-driven romance scams

Even romance scams are getting an AI upgrade nowadays. Dating and romance websites are becoming hotbeds for scammers who leverage AI in their schemes.

One common use of AI in romance scams is the generation of profile photos. Previously, scammers stole the photos of existing social media or dating site users. Over time, this approach has become less viable because reverse image search tools can find the original owner or poster of a photo. With the help of generative AI, threat actors can instead produce photorealistic portraits of people who do not exist and use them to set up fake accounts, both as profile images and as filler for an account’s media gallery that makes the fabricated account appear more credible.
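To see why stolen photos became traceable, it helps to look at perceptual hashing, one of the techniques reverse image search builds on: visually similar images produce similar fingerprints. Below is a minimal Python sketch assuming the open-source Pillow and ImageHash packages; the file names and distance threshold are illustrative assumptions, not taken from any specific tool.

    # A sketch of perceptual hashing, one technique reverse image search
    # builds on. Assumes 'pip install Pillow ImageHash'; paths are examples.
    from PIL import Image
    import imagehash

    def likely_same_photo(path_a, path_b, max_distance=8):
        # A perceptual hash (pHash) changes only slightly when an image is
        # resized, re-compressed, or lightly edited, so a small Hamming
        # distance suggests both files show the same underlying picture.
        hash_a = imagehash.phash(Image.open(path_a))
        hash_b = imagehash.phash(Image.open(path_b))
        return (hash_a - hash_b) <= max_distance  # '-' gives Hamming distance

    # Example: check whether a dating profile photo matches a known portrait.
    # print(likely_same_photo("profile.jpg", "known_portrait.jpg"))

An AI-generated portrait sidesteps this kind of check entirely: because the face never existed before, there is no original image anywhere for the hash to match.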

Aside from the images, natural language processing technology also comes in handy for engaging target victims. Perpetrators do not have to be charismatic themselves to pull off a romance scam; they can rely on AI to write the responses they use when communicating with potential victims.

Deep fakes

Alarms have been raised over the rise of deep fakes in the past couple of years. There are fears that the technology could be used not only to target specific individuals but also to sway public opinion and influence the results of democratic processes such as elections.

In an NPR interview, digital forensics expert Hany Farid of the University of California, Berkeley noted that there are still noticeable flaws in existing deep fake generation technologies. However, the technology is advancing rapidly and becoming more accessible to everyone. The imperfections are not much of a drawback, since audiences often overlook them, especially viewers who feel that a deep fake aligns with their biases or validates their misinformed opinions.

There are so-called deep fake detectors, but as several tech experts have observed, they are not that effective, and their accuracy is questionable. A Facebook-initiated challenge to develop a deep fake detection algorithm yielded a winner that correctly flagged only around two-thirds of the deep fakes thrown at it, not far above what a coin flip would manage on a binary real-or-fake call.

Voice cloning

Moreover, AI is making it easy for anyone to clone voices, with many web-based solutions putting the capability within reach of just about anyone. This can serve as a cunning tool for phone scammers, who now have a way to contact unsuspecting seniors or less tech-savvy relatives and speak to them in the voice of a family member or friend.

If scammers were able to victimize people without any voice modification technology, it is frightening to think how much more effective their schemes will become as they gain access to voice cloning tech.

Evolving and increasingly persistent threat

As AI grows more adept at understanding and mimicking human communication, social engineering becomes exponentially more dangerous. The human ability to distinguish falsity from truth, or bots from humans, diminishes as AI inches closer to approximating human intelligence and faculties. Social engineering is unlikely to disappear, so everyone must sharpen their ability to identify these attacks and put appropriate prevention and mitigation measures in place.
