The social network is hiring 1,000 more people for its global ads review teams over the next year, and is "investing more" in machine learning to help automatically flag ads. Advertisers will need "more thorough" documentation if they're running ads related to US federal elections, such as confirming which organization they represent. Facebook is also tightening its policies to prevent ads promoting "more subtle expressions of violence," which might include some of the ads stoking social tensions.
The site is aware that it isn't alone in grappling with Russia-backed campaigns, for that matter. It's "reaching out" to government and industry leaders to both share info and help establish better standards so that this won't happen elsewhere.
Facebook's moves look like they could catch dodgy ad campaigns, particularly those attempting to influence elections. However, this is part of an all-too-familiar pattern at Facebook: the company implements broad changes (usually including staff hires) after failing to anticipate the social consequences of a feature. While it would be difficult for any tech company to anticipate every possible misuse of its services, this suggests that Facebook needs to consider a feature's pitfalls extensively before it reaches the public, rather than waiting for a crisis.