extremism
Google is making free anti-terrorism moderation tools for smaller websites
Google is helping smaller websites fight terrorist content with a free moderation tool.
Twitter mistakenly suspended users after extremists abused its private image policy
Twitter said it suspended some users by mistake after far-right extremists abused its new private media policy to target journalists and researchers.
Homeland Security may use companies to find extremism on social media
Homeland Security is considering using private companies to help it find extremist threats on social media.
Facebook test warns users who may have seen 'harmful extremist content'
Facebook is testing new prompts to reach users who may be "becoming an extremist" or who may have seen "harmful extremist content."
Feds charge man with planning to blow up an Amazon data center
The FBI has charged a Texas man with planning to blow up an Amazon data center in Virginia and help 'kill' most of the internet.
House Democrats ask YouTube to explain extremism policies
House Democrats on the Energy and Commerce Committee are again pushing YouTube to explain its policies around extremist content.
Airbnb has been quietly using social media to root out and ban extremists
It's said to have banned more than 100 accounts with ties to hate groups.
Facebook temporarily bans ads for gun accessories and military gear
Facebook has paused all ads for gun accessories and military gear in the US through President-elect Biden's inauguration.
Facebook reportedly hesitated to remove Indian extremists over risk to staff
Facebook reportedly 'balked' at banning an extremist group in India after its security team warned of possible retaliation against staff.
Study says YouTube 'actively discourages' radicalism
Politicians and others complain that YouTube fosters extremism, but how caustic is it, really? Not all that much, according to researchers. Data scientist Mark Ledwich and UC Berkeley researcher Anna Zaitsev have published a study suggesting that YouTube "actively discourages" radicalism through its recommendation system. Their reviewers classified over 760 politics-oriented channels based on overall leaning, topics and proximity to the mainstream, and found that YouTube removed "almost all" suggestions for conspiracy theorists, white identitarians and "provocateurs" (read: purposefully offensive creators). For the most part, there's only a significant likelihood of being matched with questionable content if you're already watching that material.
Congress plans to investigate how social media giants are fighting hate
House lawmakers plan to unveil legislation to study the ways social media can be weaponized, The Washington Post reports. They want to better understand social media-fueled violence and to determine if tech giants are doing enough to effectively protect users from harmful content. Congress isn't just looking at what tech giants say they'll do to fight online hate and extremism. Lawmakers want to know if those efforts are effective or not.
Australia will block domains with extremist material during terror attacks
Australia's quest to fight online extremism will soon involve temporary but far-reaching bans. Prime Minister Scott Morrison has announced that the country will block internet domains hosting extremist material in the middle of terrorist attacks and other crises, such as the anti-Muslim shooting in Christchurch, New Zealand this past March. The government also plans to block domains hosting "abhorrent" material created by the perpetrators, such as murder and sexual assault.
White House invites tech companies to discuss violent online extremism
The White House plans to host a meeting with tech companies to discuss the rise of violent online extremism. According to The Washington Post, this is the Trump administration's first major engagement on the issue after the recent mass shooting in Texas left 22 people dead. Trump is scheduled to be at fundraisers in the Hamptons, so he may not attend.
House committee asks 8chan owner to testify over extremist content
Politicians are still determined to investigate 8chan's role in fueling extremism even though the site is effectively out of commission. The House's Homeland Security Chairman Bennie Thompson and Ranking Member Mike Rogers have sent a letter to 8chan owner Jim Watkins asking him to testify about the site's efforts to "investigate and mitigate" the appearance of extremist content, including white supremacist material. The politicians are concerned because 8chan has been linked to three mass shootings in 2019 (Christchurch, Poway and El Paso), with the attackers reportedly posting letters or manifestos on the site shortly before committing the murders.
Anti-Brexit RPG ‘Not Tonight’ is coming to Switch
The anti-Brexit game Not Tonight may be a year old, but it's still just as relevant. With the Brexit deadline pushed back to October and heightened political tension in the US, the game's digs at the gig economy, right-wing extremism and nationalism are timely. Soon, you'll be able to take all of that on the go. No More Robots and developer PanicBarn are bringing the RPG game to Nintendo Switch with new content.
Canada reveals measures to tackle online extremism
Canada has announced several measures to combat online extremism. Public Safety Canada said the government will provide up to $1 million CAD ($762,000) to a program called Tech Against Terrorism. The funding will help set up a system to inform smaller companies when terrorist content pops up to help them remove it faster. The agency said that will "help to achieve the commitment under the Christchurch Call to Action to support small platforms as they build capacity to remove terrorist and violent extremist content."
Christchurch shooting videos are still on Facebook over a month later
Current methods for filtering out terrorist content are still quite limited, and a recent discovery makes that all too clear. Motherboard and the Global Intellectual Property Enforcement Center's Eric Feinberg have discovered that variants of the Christchurch mass shooter's video were available on Facebook 36 days after the incident despite Facebook's efforts to wipe them from the social network. Some of them were trimmed to roughly a minute, but they were all open to the public -- you just had to click a "violent or graphic content" confirmation to see them. Others appeared to dodge filtering attempts by using screen captures instead of the raw video.
EU law could fine sites for not removing terrorist content within an hour
The European Union has been clear on its stance that terrorist content is most harmful in the first hour it appears online. Yesterday, the European Parliament voted in favor of a new rule that could require internet companies to remove terrorist content within one hour after receiving an order from authorities. Companies that repeatedly fail to abide by the law could be fined up to four percent of their global revenue.
Australian bill could imprison social network execs over violent content
Australia may take a stricter approach to violent online material than Europe in light of the mass shooting in Christchurch, New Zealand. The government is introducing legislation that would punish social networks that don't "expeditiously" remove "abhorrent" violent content produced by perpetrators, such as terrorism, kidnapping and rape. If found guilty, a company could not only face fines of up to 10 percent of its annual turnover, but also see its executives imprisoned for up to three years. The country's Safety Commissioner would have the power to issue formal notices, giving companies a deadline to remove offending material.
YouTube removed 58 million videos last quarter for violating policies
YouTube has been publishing quarterly reports detailing how many videos it removes for policy violations, and its most recent report also includes data on channel and comment removals. Between July and September, the company took down 7.8 million videos, nearly 1.7 million channels and over 224 million comments, and YouTube noted that machine learning continues to play a major role in that effort. "We've always used a mix of human reviewers and technology to address violative content on our platform, and in 2017 we started applying more advanced machine learning technology to flag content for review by our teams," the company said. "This combination of smart detection technology and highly-trained human reviewers has enabled us to consistently enforce our policies with increasing speed."