"While social media companies dress their content moderation policies in the language of human rights, their actions are largely driven by business priorities, the threat of government regulation, and outside pressure from the public and the mainstream media. This report demonstrates the impact of content moderation by analyzing the policies and practices of three platforms: Facebook, YouTube, and Twitter. Our evaluation compares platform policies regarding terrorist content (which often constrict Muslims' speech) to those on hate speech and harassment (which can affect the speech of powerful constituencies), along with publicly available information about enforcement of those policies." (Introduction, p.3)
Contents
Introduction, 3
1 Content Moderation Policies: Discretion Enabling Inequity, 4
Terrorism and Violent Extremism Policies -- Hate Speech -- Harassment
2 Enforcement of Content Moderation Policies: Who Is Affected and How? 10
Latent Biases in How Offending Content Is Identified and Assessed -- Intermediate Enforcement and Public Interest Exceptions -- Protections for the Powerful
3 Appeals Processes and Transparency Reports: A Start, Not a Solution, 18
User Appeals -- Transparency Reports
4 Recommendations, 20
Legislative Recommendations -- Platform Recommendations
Conclusion, 27