Text and Social Media Scams Increase 1200%


Conversational scams were the fastest-growing mobile threat of 2022. Unlike conventional phishing or malware delivery, these attacks unfold over a series of seemingly benign interactions until the victim’s trust has been won. According to Proofpoint data, the technique saw a 12-fold increase in reporting and was visible across a range of platforms, including SMS, messaging apps and social media.

This growth has made conversational threats the largest category of mobile abuse by volume, overtaking package-delivery scams, impersonation and other kinds of fraud in some verticals. Growth in conversational threats shows no sign of slowing and has continued through Q1 2023.

[Image: Growth of conversational threats, 2022]

Proofpoint covered the emergence of conversational threats in email in a blog post last year about pig butchering cryptocurrency fraud. In both email and mobile attacks, the interaction between attacker and victim begins with an apparently harmless message. If the victim takes the bait, the attacker can spend days or even weeks exchanging seemingly innocent texts before attempting to steal information, money or credentials.

Ultimately, these attacks are a manifestation of social engineering. Skilled manipulators take advantage of ubiquitous mobile communications to cast their net wide and land as many victims as they can. In that sense, conversational threats resemble the advance-fee fraud of the early days of the internet, when email addresses were spammed with offers of a big payday in return for helping a stranger unlock an investment or inheritance. All that has changed is the delivery mechanism and the fact that the promised pot of gold is now likely to be Bitcoin or Ethereum.

Talk isn’t always cheap

The fact that attackers have adopted conversational lures in both email and mobile, and across both financially motivated and state-sponsored attacks, suggests that the technique is effective. Society's receptivity to mobile messaging makes it an ideal threat vector, as we tend to read new messages within minutes of receiving them.

Over the past year, a type of attack known as "pig butchering" has made headlines because of the large amounts of cryptocurrency lost by victims. The name "pig butchering" originated in China, but the FBI has recently noted a sharp increase in the number of U.S. victims. Last year, Americans lost more than $3 billion to cryptocurrency scams, of which pig butchering is now a leading example. And of course, romance scams, job fraud and other long-standing forms of conversational attack remain fixtures of the threat landscape.

[Image: The conclusion of a romance scam; the Bitcoin wallet had received over $2,500 at the time of writing]

In addition to financial losses, these attacks exact a significant human cost. Pig butchering and romance scams both involve an emotional investment on the part of the victim. Trust is earned and then abused, which can prompt feelings of shame and embarrassment alongside the real-world consequences of losing money.

And what of the perpetrators? There is a growing body of evidence that many pig butchering operations use victims of human trafficking to run their scams. During its research, Proofpoint engaged with several conversational attackers to investigate the tactics being used in the wild. In one case, the attacker, a woman calling herself Andi, gradually revealed that she was of Chinese origin, and even used a message removal tool to send covert messages in Chinese before reverting to English. During these exchanges, she hinted that she was physically located in Cambodia, where many pig butchering operations are believed to be based.

Of course, Proofpoint says it does not know how much of what Andi told it is true, and eliciting sympathy is certainly an essential component of many scams. But as long as these attacks require significant human resources to operate, the groups responsible are likely to continue engaging in trafficking and modern slavery.

Pig butchering with AI

With recent advances in generative AI, conversational scammers may not need human help much longer. The release of tools like ChatGPT, Bing Chat and Google Bard heralds the arrival of a new kind of chatbot, capable of understanding context, displaying reasoning, and even attempting persuasion. And looking further ahead, AI bots trained to understand complex tax codes and investment vehicles could be used to defraud even the most sophisticated victims.

[Image: Left, a common pig butchering image. Right, a similar but unique image generated for this post using Midjourney]

Coupled with image generation models capable of creating unique photos of real-seeming people, conversational threat actors could soon be using AI as a full-stack criminal accomplice, creating all the assets they need to ensnare and defraud victims. And with advances in deepfake technology, which uses AI to synthesize both audio and video content, pig butchering could one day leap from messaging to voice and video calls, making the technique even more persuasive.
