An old SEO scam has a new AI-generated face

Bad actors are using AI tools to lend legitimacy to link-building scams.


Over the years, Engadget has been the target of a common SEO scam, wherein someone claims ownership of an image and demands a link back to a particular website. A lot of other websites would tell you the same thing, but now the scammers are making their fake DMCA takedown notices and threats of legal action look more legit with the help of easily accessible AI tools.

According to a report by 404 Media, the publisher of the website Tedium received a "copyright infringement notice" via email from a law firm called Commonwealth Legal last week. As in older versions of the scheme, the sender claimed to be reaching out "in relation to an image" connected to their client. In this case, the sender demanded the addition of a "visible and clickable link" to a website called "tech4gods" underneath the photo that was allegedly stolen.

Since Tedium had actually used a photo from a royalty-free provider, the publisher looked into the demand, found the law firm's website and, upon closer inspection, realized that the images of its lawyers were generated by AI. As 404 Media notes, the lawyers' headshots had the vacant-eyed look commonly seen in photos created by AI tools. A reverse image search on them returns results from a website that uses artificial intelligence to make "unique, worry-free model photos... from scratch." The publisher also found that the law firm's listed address, supposedly on the fourth floor of a building, corresponds to a single-story structure on Google Street View. The owner of tech4gods said he had nothing to do with the scam but admitted that he used to buy backlinks for his website.

This is just one example of how bad actors can use AI tools to fool and scam people, and we have to be more vigilant, as instances like this are likely to keep growing. Reverse image search engines are your friend, but they aren't infallible and may not always help. Deepfakes, for instance, have become a major problem in recent years, as bad actors continue to use them to create convincing videos and audio not just to scam people, but also to spread misinformation online.