extremism

Latest

  • Getty Images

    Twitter suspends more accounts for 'coordinated manipulation'

    by Rachel England
    08.28.2018

    Twitter's long been under fire for its approach to bots and extremist accounts, but now it appears to be taking a more proactive stance towards enforcing its community guidelines. Last week it suspended 284 accounts for engaging in what it called "coordinated manipulation," and now it has removed a further 486.

  • Jaap Arriens/NurPhoto via Getty Images

    EU draft law would force sites to remove extremist content

    by Jon Fingas
    08.20.2018

    The European Union is no longer convinced that self-policing is enough to purge online extremist content. The Financial Times has learned that the EU is drafting legislation to force Facebook, Twitter, YouTube and other internet companies to delete material when law enforcement flags it as terrorist-related. While EU security commissioner Julian King didn't provide details of how the measure would work, a source said it would "likely" mandate removing that content within an hour of receiving notice, turning the existing voluntary guidelines into an absolute requirement.

  • Guillaume Payen/SOPA Images/LightRocket via Getty Images

    Twitter bans far-right group Proud Boys ahead of Washington rally

    by Jon Fingas
    08.11.2018

    Just because Twitter is reluctant to take action against some of its more malicious users doesn't mean it isn't cracking down against others. Twitter has confirmed to BuzzFeed News that it banned the accounts of the far-right group Proud Boys for reportedly breaking its rules prohibiting "violent extremist groups." The social network shut down the group's main account, its satellite accounts and that of its founder Gavin McInnes. While the company didn't specify what prompted the move, it came just after a violent August 4th protest in Portland, Oregon, and just ahead of the extreme right-wing Washington, DC rally on August 12th.

  • Dominic Lipinski/PA Images via Getty Images

    Facebook's friend suggestions helped connect extremists

    by Jon Fingas
    05.06.2018

    When you think of internet giants fighting terrorism online, there's a good chance you think of them banning accounts and deleting posts. However, their friend suggestions may prove to be just as problematic. Researchers have shared a report with the Telegraph revealing that Facebook's "suggested friends" feature has been connecting Islamic extremists on a routine basis. While some instances are almost expected (contacting one extremist brings up connections to others), some of the suggestions surface purely by accident: reading an article about an extremist uprising in the Philippines led to recommendations for "dozens" of local extremists.

  • Reuters/Jonathan Ernst

    Facebook and Google will testify to Senate over terrorist content

    by Jon Fingas
    01.10.2018

    It's not just European countries that are dissatisfied with internet giants' ability to curb online terrorist content. The US Senate has summoned Facebook, Google (or rather, Alphabet) and Twitter to testify at a January 17th Commerce Committee hearing that will "examine the steps" social networks have been taking to fight the spread of online extremist material. All three have agreed to testify and will send their policy leaders. We've asked the companies for comment on the upcoming testimony and will let you know if they hint at what they'll say.

  • Dominic Lipinski/PA Images via Getty Images

    UK may tax internet giants to get more help fighting online extremism

    by Jon Fingas
    12.31.2017

    The UK still isn't convinced that internet giants are doing enough to curb online extremism, and it's now considering hitting those companies where it really hurts: their bank accounts. In an interview with the Sunday Times, security minister Ben Wallace said the country should use taxes to either incentivize stronger anti-extremist efforts or compensate for "inaction." While Wallace didn't go into detail as to what he'd like, the Times suggested it would be a windfall-based tax that targeted companies' large profits.

  • Reuters/Wolfgang Rattay

    Twitter bans extremist account retweeted by Trump

    by Jon Fingas
    12.18.2017

    Twitter's enforcement of its new anti-hate rules is having a very immediate and tangible effect. Daily Dot has noticed that Twitter banned the account of Jayda Fransen, the British extremist whose bogus anti-Muslim videos were retweeted by Donald Trump in November. The social network also banned the account of her right-wing group, Britain First, as well as those of numerous other racist organizations, such as American Renaissance and its editor Jared Taylor.

  • Reuters/Dado Ruvic

    YouTube bans all videos from an extremist cleric

    by Jon Fingas
    11.12.2017

    YouTube's efforts to catch and take down terrorist videos include some far-reaching measures. The New York Times has learned that YouTube recently removed and blocked all videos from Anwar al-Awlaki, a cleric who turned extremist and was killed by an American drone strike in 2011. While it's only his later clips that technically run afoul of YouTube guidelines, the streaming giant determined that all of them ultimately had to go. Supporters of his terrorist cause have reposted his moderate material in a show of support -- getting rid of everything theoretically prevents these adherents from finding something to rally around.

  • 35007

    EU tells tech companies to curb hate speech or face consequences

    by Rachel England
    09.28.2017

    The European Union (EU) has proposed a raft of new measures to tackle online hate speech, telling social media companies that they can expect legal consequences if they don't get rid of illegal content on their platforms. Despite companies such as Facebook, Twitter and Google pledging to do more to fight racist and violent posts, the European Commission says they're not acting fast enough, and that it's prepared to initiate a rigorous framework to hold them to account.

  • Sergei Konkov via Getty Images

    YouTube begins isolating offensive videos this week

    by Mallory Locklear
    08.25.2017

    In June, Google announced that it would begin isolating YouTube videos that weren't directly in violation of its standards but contained "controversial religious or supremacist content." And starting this week, those efforts will begin to take effect.

  • Sergei Konkov/TASS

    YouTube will isolate offensive videos that don't violate policies

    by Mallory Locklear
    08.01.2017

    YouTube has been working on ways to manage offensive and extremist content that does and does not violate its policies, and some steps it has taken include AI-assisted video detection and removal as well as input from more experts. Today, in a blog post, the company provided more detail about its ongoing efforts.

  • Dado Ruvic / Reuters

    Facebook and Twitter hold anti-extremism alliance summit

    by Saqib Shah
    08.01.2017

    The quartet of web giants that make up the Global Internet Forum to Counter Terrorism (GIFCT) are holding their inaugural meeting today. Formed in June by Twitter, Facebook, Microsoft and Google, the initiative aims to leverage technology -- such as the shared industry hash database and machine vision-based detection -- to stamp out extremist imagery online. In attendance will be UK Home Secretary Amber Rudd and the US Acting Secretary of Homeland Security Elaine Duke, along with EU and UN representatives.
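The shared industry hash database mentioned above works on a simple principle: participating companies contribute fingerprints of known extremist images, and each member checks new uploads against the pooled set. The sketch below is purely illustrative and assumes nothing about GIFCT's real system, which uses perceptual hashes (such as PhotoDNA) that tolerate small edits; plain SHA-256 here is only a stand-in for the lookup concept.

```python
import hashlib

# Illustrative shared database of hex digests of known extremist images.
# (This value is the SHA-256 of the placeholder bytes b"test"; real entries
# would be perceptual hashes contributed by GIFCT member companies.)
shared_hash_db = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_image(image_bytes: bytes) -> str:
    """Return the hex digest used as the database lookup key."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_extremist_image(image_bytes: bytes) -> bool:
    """Flag an upload whose hash appears in the shared database."""
    return hash_image(image_bytes) in shared_hash_db
```

The appeal of this design for the companies involved is that only hashes are shared, never the images themselves, so each platform can match against the pooled fingerprints without exchanging the underlying content.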

  • Anatoliy Babiy via Getty Images

    YouTube will redirect you away from extremist videos

    by Nathan Ingraham
    07.20.2017

    Last month, Google announced steps it would take to help stamp out extremism and terrorism-related content online. Today, the company is announcing a new initiative on YouTube to help guide people away from terrorism propaganda videos and steer them towards content that debunks extremist messaging and mythology. It's appropriately called the Redirect Method, because it essentially redirects users searching for specific keywords on YouTube to playlists featuring videos that counter extremist content.
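At its core, the Redirect Method is a mapping from flagged search terms to curated counter-narrative playlists. The sketch below is a hypothetical simplification; the actual keyword lists and playlists are curated by Google's Jigsaw unit and are not public, so both the keywords and playlist names here are invented for illustration.

```python
# Hypothetical keyword -> playlist mapping (invented for illustration).
REDIRECT_PLAYLISTS = {
    "extremist recruitment": "playlist/counter-narratives-01",
    "join the caliphate": "playlist/counter-narratives-02",
}

def resolve_search(query: str) -> str:
    """Return a counter-narrative playlist when the query contains a
    flagged keyword; otherwise fall through to ordinary search results."""
    normalized = query.lower()
    for keyword, playlist in REDIRECT_PLAYLISTS.items():
        if keyword in normalized:
            return playlist
    return f"search_results:{query}"
```

The key design choice is that nothing is blocked: a flagged query still returns content, just content chosen to rebut the messaging the searcher was looking for.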

  • Michael Short/Bloomberg via Getty Images

    Google bets AI and human oversight will curb online extremism

    by Jon Fingas
    06.18.2017

    Google is under a lot of pressure to stamp out extremists' online presences, and it's responding to that heat today. The internet giant has outlined four steps it's taking to flag and remove pro-terrorism content on its pages, particularly on YouTube. Technological improvements play a role, of course, but the company is also counting on a human element that will catch what its automated filters can't.

  • Thierry Chesnot/Getty Images

    France and the UK pressure internet companies to fight extremism

    by Jon Fingas
    06.13.2017

    The British and French have already made separate efforts to limit extremists' online presences, but they now believe they can accomplish more by working together. The two nations have unveiled a joint campaign to prevent extremists from using the internet as a "safe space." They're vowing to pressure tech firms into doing more (such as better automatic removal tools), and are "exploring the possibility" of fines and other legal penalties if those companies don't pull offending material.

  • Lucy Nicholson / Reuters

    Google vows to pull ads from extreme videos and sites

    by Nick Summers
    03.21.2017

    Google has detailed new safeguards to ensure brands don't have their adverts served against extremist content. The measures follow a wave of complaints and advertising withdrawals by the UK government, Audi and L'Oreal, among others, triggered by a Times investigation which revealed a number of adverts being shown alongside harmful and inappropriate videos on YouTube. In a blog post, Google said it would be taking "a tougher stance" and "removing ads more effectively" from content that is attacking people based on their race, religion or gender. It also promised to hire "significant numbers" of new staff to review "questionable content."

  • Shutterstock

    UK government pulls YouTube ads over hate speech concerns (update)

    by Jamie Rigg
    03.17.2017

    The UK government has pulled adverts from YouTube after a report from The Times found they were running alongside extremist content. Ads for campaigns such as promoting blood donation and Army recruitment have been restricted after the apparent failings of Google's ad platform, which is supposed to work within guidelines set by the advertising party. Other UK brands, including Channel 4 and The Guardian, have also pressed the pause button on advertising with Google after learning their names were appearing alongside content from the likes of hate preachers and rape apologists.

  • Ahmad Al-Rubaye/AFP/Getty Images

    Hacker faces 20 years in prison for helping ISIS

    by Jon Fingas
    09.25.2016

    The US just broke new ground in its bid to fight pro-terrorist hackers. A judge has sentenced Kosovo citizen Ardit Ferizi to 20 years in prison for hacking a US company in order to collect information about 1,300 government and military personnel and help ISIS create a hit list. It's the country's first conviction for terrorism-related hacking, according to Assistant Attorney General John Carlin. Ferizi pleaded guilty on June 15th, roughly 8 months after Malaysian police arrested him on the US' behalf.

  • Dado Ruvic / REUTERS

    Twitter suspends 235,000 accounts for promoting terrorism

    by David Lumb
    08.18.2016

    For years, Twitter maintained the hands-off approach of the free-speech bastion it aspired to be, enabling an open forum but also allowing hate speech to flow across its channels. Amid criticism and pressure to stop extremist groups from using its services for coordination and recruitment, the company began cracking down in mid-2015. Today it announced that over the past six months it has suspended 235,000 accounts suspected of promoting terrorism, bringing the total to 360,000 since last year.

  • Adrian Dennis/AFP/Getty Images

    Twitter and YouTube wouldn't delete an extremist cleric's posts (update: gone)

    by Jon Fingas
    08.17.2016

    Internet giants have been increasingly willing to take down extremist content, but their previous reluctance is coming back to haunt them. The UK recently convicted radical cleric Anjem Choudary (and co-defendant Mohammed Rahman) of rallying support for ISIS, and court documents have revealed that neither Twitter nor YouTube agreed to take down key content. Twitter hasn't deleted his account, for example, despite British law enforcement's claims that it violates Twitter policies on promoting terrorism -- even after he was arrested in September 2014. It pulled Rahman's, but not in sync with an official request.