Generative AI Makes Social Engineering More Dangerous—and Harder to Detect

According to the 2025 IBM X-Force Threat Intelligence Index, threat actors today are, on average, pursuing bigger, broader campaigns than they have in the past. This development is partly a matter of changing tactics, as many attackers have shifted their focus to supply-chain attacks that affect many victims at once.

But it is also a matter of changing tools. Many attackers have adopted generative AI like an intern or assistant, using it to build websites, generate malicious code and even write phishing emails. In this way, AI helps threat actors carry out more attacks in less time. 

“The AI models are really helping attackers clean up their messages,” Carruthers says. “Making them more succinct, making them more urgent—making them into something that more people fall for.”

Carruthers points out that bad grammar and awkward turns of phrase have long been among the most common red flags in phishing attempts. Cybercriminals tend not to be assiduous with spellcheck, and they’re often writing in a second or third language, leading to a higher volume of errors overall.

But generative AI tools can generate technically perfect prose in virtually all major world languages, concealing some of the most obvious social-engineering tells and fooling more victims. 

AI can also write those messages much faster than a person can. In experiments, Carruthers and the X-Force team found that generative AI can write an effective phishing email in five minutes. A team of humans needs about 16 hours to write a comparable message, with deep research on targets accounting for much of that time.

Consider, too, that deepfake technology allows AI models to create fake images, audio and even video calls, lending further credibility to attackers’ schemes.

In 2024 alone, Americans lost USD 12.5 billion to phishing attacks and other fraud. That number might rise as more scammers use generative AI to create more convincing phishing messages, in more languages, in less time. 

And with the arrival of AI agents, fraudsters can scale their operations even further.
