
June 08, 2022 T&S Newsletter



Early Warning | Policy & Regulations | Jobs & Careers | T&S FAQs



T&S Early Warning News

Get ahead of news stories impacting the T&S industry.


Ethical Scaling for Content Moderation: Extreme Speech and the (In)Significance of Artificial Intelligence


Shorenstein Center | June 07, 2022

Company Listed: Social Media


“In the southern Indian state of Kerala, the right-wing group is a numerical minority. They are frequently attacked online by members of the communist political party. Should I then categorize this speech as exclusionary extreme speech since it is against a minority group?” asked a fact-checker from India, as we gathered at a virtual team meeting to discuss proper labels to categorize different forms of contentious speech that circulate online. For AI4Dignity, a social intervention project that blends machine learning and ethnography to articulate responsible processes for online content moderation, it was still an early stage of labeling. Fact-checkers from Brazil, Germany, India and Kenya, who participated as community intermediaries in the project, were at that time busy slotting problematic passages they had gathered from social media into three different categories of extreme speech for machine learning.


We had identified these types as derogatory extreme speech (demeaning, but not warranting removal of content), exclusionary extreme speech (explicit and implicit exclusion of target groups, requiring stricter moderation actions such as demoting) and dangerous speech (posing imminent danger of physical violence and warranting immediate removal of content). We had also drawn up a list of target groups, which in its final version included ethnic minorities, immigrants, religious minorities, sexual minorities, women, racialized groups, historically oppressed caste groups, indigenous groups, large ethnic groups and any other. Under derogatory extreme speech, we also had groups beyond protected characteristics, such as politicians, legacy media, the state and civil society advocates for inclusive societies, as targets.
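For readers who want to picture how a labeling scheme like this might be represented for machine learning, here is a minimal Python sketch. It only mirrors the categories and target groups described above; the class names, fields and example values are illustrative assumptions, not AI4Dignity's actual code or data format.

```python
# Illustrative sketch only (Python 3.9+): category and target-group names follow the
# article's description, but the classes, fields and example values are assumptions,
# not AI4Dignity's actual schema.
from dataclasses import dataclass
from enum import Enum, auto


class ExtremeSpeechCategory(Enum):
    DEROGATORY = auto()    # demeaning, but does not warrant removal of content
    EXCLUSIONARY = auto()  # excludes target groups; stricter actions such as demoting
    DANGEROUS = auto()     # imminent danger of physical violence; immediate removal


class TargetGroup(Enum):
    ETHNIC_MINORITIES = auto()
    IMMIGRANTS = auto()
    RELIGIOUS_MINORITIES = auto()
    SEXUAL_MINORITIES = auto()
    WOMEN = auto()
    RACIALIZED_GROUPS = auto()
    HISTORICALLY_OPPRESSED_CASTE_GROUPS = auto()
    INDIGENOUS_GROUPS = auto()
    LARGE_ETHNIC_GROUPS = auto()
    OTHER = auto()


@dataclass
class LabeledPassage:
    """One annotated passage produced by a fact-checker for ML training."""
    text: str
    category: ExtremeSpeechCategory
    targets: list[TargetGroup]
    annotator_country: str  # e.g. "Brazil", "Germany", "India", "Kenya"


# Hypothetical usage: the passage placeholder and labels are made up.
example = LabeledPassage(
    text="<problematic passage collected from social media>",
    category=ExtremeSpeechCategory.DEROGATORY,
    targets=[TargetGroup.OTHER],
    annotator_country="India",
)
```

In a workflow of this kind, each fact-checker's annotation would map onto something like the record above before being used to train a classifier.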

Read More


Twitter Must Tackle a Problem Far Bigger Than Bots


Bloomberg | June 08, 2022

Company Listed: Twitter


For years, anyone covering China as a journalist, researcher or public policy maker has had to deal with trolls, fake accounts, copycats and harassment on social media platforms like Facebook and Twitter. Recently it’s been getting worse, and for a growing number of female writers of Asian descent it has become particularly aggressive and malicious.


New research released last week connected the dots for what most active Twitter writers in the region already knew: There’s an ongoing, sophisticated and coordinated campaign being waged by Chinese Communist Party-linked operatives against a core group of women who cover China.

Read More


Meet the doctor fighting fake news about monkeypox on TikTok


Euro News | June 07, 2022

Company Listed: TikTok


As monkeypox cases rise across the world, so does online misinformation about the virus.


The World Health Organization (WHO) now says there are 780 new cases of the disease in non-endemic countries, including Portugal, Spain, and the United Kingdom.


Unfounded claims about the virus's origins and who it infects have also been spreading online.

Read More


New UK centre will help fight information war


BBC | June 07, 2022

Company Listed: Google