As the buzz around ChatGPT and other generative AI has grown, so has scammers’ interest in the tech. In a new report published by Meta, the company says it’s seen a sharp uptick in malware disguised as ChatGPT and similar AI software.
In a statement, the company said that since March of 2023 alone, its researchers have discovered “ten malware families using ChatGPT and other similar themes to compromise accounts across the internet” and that it’s blocked more than 1,000 malicious links from its platform. According to Meta, the scams often involve mobile apps or browser extensions posing as ChatGPT tools. And while in some cases the tools do offer some ChatGPT functionality, their real purpose is to steal users’ account credentials.
In a call with reporters, Meta Chief Security Officer Guy Rosen said the scammers behind these exploits are taking advantage of the surge in interest in generative AI. “As an industry we've seen this across other topics that are popular in their time, such as crypto scams fueled by the immense interest in digital currency,” Rosen said. “So from a bad actor’s perspective, ChatGPT is the new crypto.”
Meta noted that people who manage businesses on Facebook or who otherwise use the platform for work have been particular targets. Scammers will often go after users’ personal accounts in order to gain access to a connected business page or advertising account, which are more likely to have a linked credit card.
To combat this, Meta said it plans to introduce a new type of account for businesses called “Meta Work” accounts. These accounts will enable users to access Facebook’s Business Manager tools without a personal Facebook account. “This will help keep business accounts more secure in cases when attackers begin with a personal account compromise,” the company said in a statement. Meta said it will start a “limited” test of the new work accounts this year and will expand it “over time.”
Additionally, Meta is rolling out a new tool that will help businesses detect and remove malware. The tool “guides people step-by-step through how to identify and remove malware, including using third-party antivirus tools” to help prevent businesses from repeatedly losing access to accounts.
Meta’s researchers aren’t the first to warn about fake ChatGPT tools leading to hacked accounts. Recently, researchers warned about a Chrome extension posing as ChatGPT software that led to the hacking of a number of Facebook accounts. The exploit, reported by Bleeping Computer, became known as the “Lily Collins” hack because the names on victims’ accounts were changed to “Lilly Collins.”
On the same call, Meta’s Head of Security Policy, Nathaniel Gleicher, said these attacks also often target people connected to businesses. “What they'll want to do is to close that personal account to burn their access and prevent the legitimate user from getting back in,” he said. “One of the tactics we're now seeing is where they will take the personal account and rename it to have the name of a prominent celebrity in hopes that that gets the account taken down.” He added that the new work accounts would help prevent similar hacks in the future.