During the minutes and hours after shots rang out at YouTube's headquarters in San Bruno, many people used Twitter just as they have after other high-profile events: to spread fake information and hoaxes. In response to reports about the severity of its "fake news" problem (as a BuzzFeed reporter maintained a live thread collecting hoaxes, trolls began inserting her image into their fakes), Twitter published a post titled "Serving the Public Conversation During Breaking Events."
It didn't mention hoaxes like the infamous "Sam Hyde" images by name, or the hijacking of YouTube employee Vadym Lavrusik's account, but it broadly outlined the company's policies and aims for moderating posts during this type of event.
During these types of situations, some of the ways we evaluate content include:
Is the content posted to harass or abuse another person, violating our rules on abusive behavior?
Is this meant to incite fear against a protected category as outlined in our hateful conduct policy?
Could misrepresenting someone in this way cause real-world harm to the person who is targeted per our rules on violent threats?
Is this account attempting to manipulate or disrupt the conversation and violating our rules against spam?
Can we detect if this account owner has been previously suspended? As outlined in our range of enforcement options, when someone is suspended from Twitter, the former account owner is not allowed to create new accounts.
Twitter maintains that beyond banning accounts and removing posts, it uses tools like Moments to highlight information people can trust, although that may not match the way many people get their information -- directly via retweets from people they follow.