Document details

Platform Problems and Regulatory Solutions: Findings from a Comprehensive Review of Existing Studies and Investigations

Paris: UNESCO; Research ICT Africa, 2023. 17 pp.

Series: World Trends in Freedom of Expression and Media Development

CC BY-SA

"The proliferation of hate speech and disinformation on online platforms has serious implications for human rights, trust and safety as per international human rights law and standards. The mutually-reinforcing determinants of the problems are: ‘attention economics’; automated advertising systems; external manipulators; company spending priorities; stakeholder knowledge deficits; and flaws in platforms’ policies and in their implementation. How platforms understand and identify harms is insufficiently mapped to human rights standards, and there is a gap in how generic policy elements should deal with local cases, different rights and business models when there are tensions. Enforcement by platforms of their own terms of service to date has grave shortfalls, while attempts to improve outcomes by automating moderation have their limitations. Inequalities in policy and practice abound in relation to different categories of people, countries and languages, while technology advances are raising even more challenges. Problems of ‘solo-regulation’ by individual platforms in content curation and moderation are paralleled by harms associated with unilateral state regulation. Many countries have laws governing content online, but their vagueness fuels arbitrary measures by both authorities and platforms. Hybrid regulatory arrangements can help by elaborating transparency requirements, and setting standards for mandatory human rights impact assessments." (Key messages)
Contents:
1 Why online disinformation and hate speech matter, 2
2 Unpacking the determinants of online hate speech and disinformation, 4
3 Flaws in platform content policies and implementation, 7
4 Possible regulatory solutions to address concerns with the platforms, 10
5 Recommendations, 17