As artificial intelligence technology becomes more advanced and easier to access, so do the scams that use it. Phishing scams, which use emails or social media messages to impersonate a trustworthy source, attempt to extract sensitive data from an unsuspecting person. Now armed with AI bots and personalized information, these phishing schemes are harder to detect, leading to an unprecedented increase in cybercrime.
Scammers can use AI bots to tirelessly scrape social media for details about companies and individuals. These programs collect vast amounts of data about a person’s social media habits, including topic interests and associations. Armed with such specifics, cybercriminals can readily run large-scale phishing campaigns that mimic the tone and personality of a real person. Most notable is the recent uptick in phishing schemes targeting corporate executives.
“This is getting worse and it’s getting very personal, and this is why we suspect AI is behind a lot of it,” said Kirsty Kelly, chief information security officer with British insurer Beazley, per the Financial Times. “We’re starting to see very targeted attacks that have scraped an immense amount of information about a person.”
Using AI To Combat Cybercrime Like Phishing Scams
Cybercriminals rely on phishing tactics because they work. Per the U.S. Cybersecurity and Infrastructure Security Agency, at least 90% of successful security breaches start with a phishing scheme.
With AI, many of these phishing emails sneak past the cybersecurity software designed to protect users. Basic filters, which can detect an exact copy of an email sent numerous times, just can’t keep up with AI’s ability to quickly generate countless variations of a message, each capable of mimicking human writing tone and style.
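To illustrate the limitation, here is a minimal, hypothetical sketch (not any vendor’s actual product) of an exact-match filter that fingerprints messages with a hash. Because any AI-generated rewording produces a completely different fingerprint, the variant sails past the blocklist:

```python
import hashlib

def fingerprint(message: str) -> str:
    """Hash a lightly normalized copy of the message, as a basic exact-match filter might."""
    normalized = " ".join(message.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# A phishing email that was already seen and blocked once.
known_scam = "Your account is locked. Click here to verify your password."
blocklist = {fingerprint(known_scam)}

# An AI-generated variation of the same lure.
variant = "We locked your account. Please click this link to confirm your password."

print(fingerprint(known_scam) in blocklist)  # True: the exact copy is caught
print(fingerprint(variant) in blocklist)     # False: the reworded variant slips through
```

The same lure, reworded, defeats the filter entirely, which is why defenders are turning to AI models that score intent and style rather than matching known copies.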
Just as AI helps crooks, it can also help companies beat them at their own game. Developers are racing to build AI bots as sophisticated as the tech criminals are using. These bots learn to detect AI-generated content and code, then block it before an attack can be launched.
AI tools designed to detect cyberattacks aren’t just for big companies, either. Even small businesses that lack a team of security experts can use powerful AI software to reduce the number of breaches they suffer.
Just over half of all companies are using some sort of AI to counter cybercrime, according to a survey conducted by PYMNTS. Among the respondents, most expected AI to be fully implemented in the fight against cybercriminals within seven years.
“It is essentially an adversarial game; criminals are out to make money and the [business] community needs to curtail that activity,” said Michael Shearer, chief solutions officer at Hawk, in an interview with PYMNTS.
Employee training is also crucial. Teaching staff at every level how to spot a phishing email, and keeping them current on the latest tactics, could save companies from multimillion-dollar losses and embarrassing breach disclosures, like the one Stop & Shop suffered in November 2024.