
Mandiant Threat Defense has published new research on an ongoing campaign by a Vietnam-nexus threat group that exploits public interest in AI tools. The group, tracked as UNC6032, tricks victims into clicking malicious social media ads that spoof popular AI video generator brands such as Luma AI and Canva Dream Lab.
Clicking the ad redirects the user to a malicious site that mimics AI tool functionality. Instead of delivering AI-generated content, the site delivers infostealers and backdoors, which allow the threat actors to quietly steal login credentials, credit card data, and other sensitive information. The gathered data is then likely sold in underground marketplaces. This remains a significant problem for consumers and enterprises alike: stolen credentials were the second most common initial infection vector, according to Mandiant’s M-Trends 2025 report.
Mandiant Threat Defense identified thousands of these ads across social media platforms such as Facebook and LinkedIn, and believes similar campaigns are likely operating on other platforms.
Mandiant Threat Defense collaborated with Meta and LinkedIn on its findings to combat this campaign, and as noted in the research, “a significant portion of Meta’s detection and removal” of the identified malicious ads, domains, and accounts began in 2024, prior to Mandiant alerting them to additional malicious activity.
However, since new ads are being created every day, ongoing collaboration across the industry will be vital to better safeguard everyday users.
“Threat actors are constantly evolving their TTPs,” said Mandiant Senior Manager and Report Co-Author Yash Gupta. “In this case, they have weaponised the popularity of AI tools, coupled with malicious ads to promote them.”
“A well-crafted website masquerading as a legitimate AI tool can pose a threat to anyone, whether an organisation or an individual,” he added. “Users should exercise caution when engaging with seemingly harmless ads and the websites they lead to.”