content moderation
X is suing California over social media content moderation law
X, the social media company previously known as Twitter, is suing the state of California over a law that requires companies to disclose details about their content moderation practices.
OpenAI is using GPT-4 to build an AI-powered content moderation system
ChatGPT maker OpenAI says it's using GPT-4 to build an AI-powered "content moderation system that is scalable, consistent and customizable." The company says humans should still be involved in the moderation process.
Sony didn’t want ‘Roblox’ on PlayStation due to child safety concerns
Sony blocked Roblox from PlayStation consoles because it was worried about inappropriate content reaching children. The revelation comes from a 2022 document, first reported by Axios, uncovered in the FTC’s Microsoft trial. However, Sony Interactive Entertainment President and CEO Jim Ryan said at the time that the company’s stance was softening, leaving the door open to an eventual PlayStation port for the viral user-generated platform.
Two Supreme Court cases could upend the rules of the internet
This week, the Supreme Court will hear two cases, Gonzalez v. Google and Twitter v. Taamneh, that give it an opportunity to drastically change the rules of speech online.
Twitter was targeted by a coordinated trolling campaign following Musk takeover
Since Friday, Twitter has been working to stop an “organized effort” by trolls to make people think the company had weakened its content guidelines.
Elon Musk’s Twitter takeover has already emboldened the trolls
It's been less than a day since Elon Musk took over Twitter, but researchers say they have already seen a rise in hate speech on the platform.
Trump’s free speech app Truth Social is censoring content and kicking off users
Nearly 500,000 people are on the app’s “waitlist”.
Facebook's program for VIPs allows politicians and celebs to break its rules, report says
Facebook has for years used a little known VIP program that’s enabled millions of high-profile users to skirt its rules, according to a new report in The Wall Street Journal.
Facebook is under new scrutiny for its moderation practices in Europe
Facebook is again facing questions about content moderators after a moderator told an Irish parliamentary committee the company isn't protecting reviewers.
Facebook's 'Supreme Court' is about to face its first big test
The Oversight Board, Facebook's 'Supreme Court,' is getting ready for the most consequential decision in its short existence.
TikTok forms an EU Safety Advisory Council following scrutiny from regulators
TikTok has formed a nine-member Safety Advisory Council in Europe to help shape its content moderation policies and practices.
Pornhub removes all unverified videos from its platform
Last week, infamous porn-hosting site Pornhub made a big change by cutting off "unverified" uploads. Overnight, Pornhub removed millions of uploaded videos — and, according to Vice, the site will start reviewing and verifying that those videos meet its "trust and safety policy." This comes after a New York Times report last week highlighted how the site's lax enforcement of its policies was leading to child exploitation.
Pornhub ends unverified uploads and bans downloads
Pornhub is ending uploads from unverified users and removing users' ability to download much of the site's content.
Facebook moderators say company is asking them to 'risk our lives'
Facebook's content moderators are demanding the company do more to protect them and their families from COVID-19.
DoJ asks Congress to limit protections for social media companies
The Department of Justice (DoJ) is asking Congress to adopt a new law that would make Facebook, Google and Twitter liable for the way they moderate content, The Washington Post reports. The legislation would alter the controversial Section 230 so that tech companies would be accountable when they “unlawfully censor speech and when they knowingly facilitate criminal activity online.” “For too long Section 230 has provided a shield for online platforms to operate with impunity,” Attorney General William Barr said in a statement.
Facebook is reportedly testing a ‘virality circuit breaker’ to stop misinformation
Facebook is reportedly piloting a new way to check viral posts for misinformation before they spread too far.
YouTube blames bug for censoring comments on China's ruling party
YouTube is blaming an error for the fact that comments containing two specific Chinese-language phrases were being deleted automatically.
Facebook will pay content moderators $52 million in PTSD settlement
Each of the 11,250 plaintiffs will receive at least $1,000.
TikTok names experts who will help shape its content policies
TikTok has named the group of experts who will help guide the app's content moderation policies as part of the newly formed "Content Advisory Council." The group, chaired by George Washington University Law Professor Dawn Nunziato, is made up of academics who are experts in issues like child safety, free speech, politics, and video forensics. The seven-member council (the company says it will eventually grow to "around a dozen" people) will start meeting with TikTok's US executives later this month to discuss "critical topics around platform integrity, including policies against misinformation and election interference."
Facebook bug marked legitimate coronavirus info as spam
Facebook was quick to say that it would fight coronavirus misinformation, but yesterday, one tool appeared to go haywire. Users reported that Facebook was marking posts with legitimate information and articles about the coronavirus as spam. According to The Verge, Facebook has resolved the issue and restored the posts that were incorrectly removed, but this is a serious glitch at a time when so many people are looking for accurate information on the coronavirus and COVID-19.