NYU report lists likely social media disinformation tactics for 2020

Expect new bot tactics, deepfake videos and more fake news on Instagram.

The 2020 US presidential election will serve as the ultimate test for social media platforms like Facebook and Twitter to prove they can combat fake news. But could they be fighting the last war? A report released by NYU's Stern Center for Business and Human Rights argues that relatively new tactics like domestic fake news operations, phony memes on Instagram and deepfake videos will play a bigger role in the next election.

The report largely rehashes how fake news has evolved since 2016 into something as likely to spread on Instagram or WhatsApp as on Facebook, and as likely to come from domestic actors as from Russia. "Disinformation poses a major threat to the U.S. presidential election in 2020, with the potential to swing the result in a close race through new and updated tactics," said Paul M. Barrett, deputy director of the NYU Stern Center for Business and Human Rights and the report's author.

The study predicts that Instagram in 2020 will become what Facebook was in 2016: the vehicle of choice for fake news. As evidence, it points to a 2018 report prepared for the Senate Intelligence Committee, which found that Russia's Internet Research Agency received more engagement on the popular photo-sharing app than on Facebook. The platform has taken steps this year to cut back on misinformation, such as blocking anti-vax content and allowing users to flag false content. Still, the photo-oriented Instagram has largely escaped the scrutiny that platforms which disseminate news articles, such as Facebook and Twitter, have faced from the public.

In an interview with Engadget, Barrett said he felt disinformation was becoming more of an image game than a text game. Fake news on Instagram can travel long distances in the form of memes, as evidenced by a viral hoax about a policy change that was shared by multiple celebrities.

The report points out other potential threats that have yet to surface, such as deepfake videos. While a doctored video featuring Nancy Pelosi gained a fair amount of traction this year, the technology has yet to become widespread. Since last fall, Facebook has used a filter to detect altered photos and videos.

Interestingly enough, as platforms get more skilled at taking down fake news, bot accounts are finding other ways to survive. There's been an increase in bot accounts amplifying old news or divisive real news, according to a Symantec researcher quoted in the report. The threat intelligence firm Recorded Future coined the term "fishwrapping" to describe when social media trolls recycle old breaking news about terrorist attacks to create the impression that attacks are more frequent or recent than they actually are.

In the months leading up to the election, the report says, platforms should be on the lookout for more fake news originating from domestic actors. The New York Times reported that Americans have been caught imitating Russian fake news tactics, creating fake networks of Facebook pages and accounts. Another thing to watch for is fake news efforts from other countries: Iran conducted its own fake news operation targeting Americans this year, and China disseminated propaganda about the protests in Hong Kong.

Barrett said what surprised him the most about the report was the prospect that "we could have foreign disinformation coming at us from three sources (Russia, Iran, China), at the same time that an even greater volume of disinformation will come from right here at home." Meanwhile, it seems like Big Tech's understanding of what fake news is -- and how to meaningfully combat it -- is still in its early stages.