ByteDance's TikTok has faced criticism over its handling of Israel-Hamas war content and related hate speech, and it is now responding with new initiatives. On the moderation side, the platform is rolling out new comment filtering tools, most notably "Comment Care Mode," which TikTok says automatically filters comments similar to ones the creator has previously reported or deleted. Another new feature filters out comments from accounts that aren't on the creator's following or follower list. The company aims to raise new users' awareness of these tools via a prompt after their first video upload, and in the longer run it will set up a product beta testing program to gather direct feedback from creators.
TikTok has also set up a new anti-hate and discrimination task force in the hope of proactively spotting antisemitism, Islamophobia and other hate trends before they get out of hand. The team will work with experts to improve training for moderators so they can better address hate speech, and next year it will expand the company's managed creator communities to Jewish and other interfaith communities, as well as API (Asian and Pacific Islander) and LGBTQ+ creators.
The Information added that TikTok plans to expand access to its research APIs to civil society groups — something the likes of the Anti-Defamation League have apparently been requesting for years — so they can better understand the types of content spreading on TikTok. This stands in stark contrast to how X — well, Elon Musk, mostly — limited social media researchers' access to its platform, while the company continues to deny any wrongdoing over accusations of antisemitic content.
While TikTok's stepped-up efforts may not convince those who still accuse its algorithm of bias, the platform has at least continued removing a staggering amount of offending content: between October 7 and November 30, it removed 1.3 million videos in the conflict region, including "content promoting Hamas, hate speech, terrorism and misinformation."