Beginning today, Instagram users can report content that they believe to be false. Instagram will use those flags to better understand misinformation on the platform and to train its AI to spot false content. In time, Instagram will use the feedback, along with other "signals" -- such as how old a post is and the account's previous behavior -- to determine whether a post needs to be reviewed by third-party fact-checkers. This is slightly different from the pilot program Instagram launched in May, which allows users to flag false content for review by fact-checkers. For now, that will remain a pilot.
To flag false content, users can tap the three-dot menu at the top right corner of an Instagram post, select "it's inappropriate" and choose "false information." If a post is indeed incorrect, it won't be deleted, but it will be "downplayed" on the Explore tab and hashtag pages. The post's creator won't be notified when their content is under review, and they won't know whether the fact-checkers decide it's false.
The Instagram posts will be reviewed by the same third-party fact-checkers that review flagged Facebook content. Facebook knows it has a fake news problem, and it has been using third-party fact-checkers for years. One of those companies, Full Fact, recently spoke out, saying Facebook's fact-checking algorithms need work. Facebook isn't always quick to fight fake news -- it took three years to address an issue in Moldova -- and it has had to defend its decision not to remove fake news from politicians. Letting users flag false Instagram posts may not change much in the short term, but it could help Facebook build stronger detection tools.