"Legislation targeting “fake news” — a contested term used to reference both news and news providers that governments (or others) reject as well as disinformation campaigns — has increased significantly over the last few years, particularly in the wake of COVID-19. This study finds that even
when technically aimed at curbing disinformation, the majority of “fake news” laws, either passed or actively considered from 2020 to 2023, lessen the protection of an independent press and risk the public’s open access to a plurality of fact-based news. Indeed, governments can — and have — used this type of legislation to label independent journalism as “fake news” or disinformation. According to the Committee to Protect Journalists, among the 363 reporters jailed around the world in 2022, 39 were imprisoned for “fake news” or disinformation policy violations. Even within well-intended legislative policies, like Germany’s laws which focus on platform moderation of “illegal content” related to hate speech and Holocaust denial, concerns can arise over potential government censorship." (Page 1)
"This volume addresses the concept of “(in)nocent lies” in the media – beyond the concept of misleading information online, this extends to a deliberate effort to spread misinformation, disinformation and conspiracy theories – and proposes a critical approach to tackle the issue in related interdisciplinary fields. The book takes a multidisciplinary and international approach, addressing the digital divide and global inequality, as well as algorithmic bias, how misinformation harms vulnerable groups, social lynching and the effect of misinformation on certain social, political and cultural agendas, among other topics. Arranged thematically, the chapters paint a nuanced and original picture of this issue." (Publisher description)
"Governments have updated penal codes and national security laws, enacted fake news and cybersecurity laws as well as laws that govern internet service providers and technology companies. These laws have widely been used to block and remove online content that calls out blind spots in government policies and to intimidate and prosecute these content creators through hefty fines and jail time. Efforts to hold political office holders and government officials accountable for their policies are increasingly penalised. These government actions have significantly impacted civil society actors in numerous ways. First, individuals and organisations utilising the online sphere to hold government officials and policies accountable have come under intense scrutiny, resulting in the criminalisation of critics and the blocking and removal of online content deemed sensitive by state authorities. Second, the effectiveness of civil society in holding governments accountable is compromised, as state authorities routinely direct internet service providers and technology companies to block or remove online content considered sensitive or illegal. Consequently, individuals and organisations increasingly find their digital content at risk of being blocked or removed, succumbing to government directives to internet service providers and technology companies. This diminishes civil society’s calls for accountability. Third, on several occasions, governments have imposed internet shutdowns - particularly during elections and politically sensitive periods - to disrupt the information flow. Ultimately, this has limited civil society’s ability to send and receive communications effectively to mobilise people to hold governments publicly accountable during politically important instances. Fourth, trolling has surfaced as a mainstream strategy to harass and intimidate individuals and organisations who seek to hold governments accountable. Typically orchestrated by organised groups or cybertroopers, these digital attacks increasingly involve online hate speech directed at women who call out blind spots in government policies.
Fifth, the ways of working of INGOs and CSOs have changed, leading many organisations to restrict the scope and assertiveness of their communications to shield themselves from government retribution and trolling. Some entities have opted to remove the visibility of their organisations, incorporating measures such as disallowing the use of their logos or the publishing of videos, photos and text by local partners in order to distance themselves from particular activities and contents of knowledge products. Given these developments, the principal recommendation is that key stakeholders, including international organisations, governments, ISPs and technology companies, and civil society actors, should recognise that criticism of government policies and officials is a legitimate activity and a vital form of expression for civil society. Hence, any measures, whether legal or non-legal, that interfere with or criminalise this legitimate activity should be rescinded or disallowed. Instead, measures should be put in place to ensure that civil society is empowered to call out the blind spots in government policies." (Executive summary)
"The three countries [Bosnia and Herzegovina, Indonesia, and Kenya] provide evidence of online hate speech and disinformation affecting human rights offline. The evidence is not comprehensive yet clear enough to raise serious concerns. Online gender-based violence is also reported as critical in the
three countries. In the three countries, national legislation to address harmful content shows some degree of inconsistency in comparison to international standards, notably in relation to the protection of freedom of expression. The reasons for such inconsistency vary among countries. The effective enforcement of legal frameworks is uneven in all three countries. Social and cultural inequalities are often reproduced in government or judicial decisions, and vagueness in legislation opens space for discretionary decisions. Platform companies have offices in Indonesia and Kenya, but not in Bosnia and Herzegovina. In the three countries, there is a lack of transparency in how companies allocate the roles of moderation tasks, including the number of different language moderators and their trusted partners and sources. Companies do not process content moderation in some of the main local languages and community standards are not entirely or promptly available in local languages." (Executive summary)
"The proliferation of hate speech and disinformation on online platforms has serious implications for human rights, trust and safety as per international human rights law and standards. The mutually-reinforcing determinants of the problems are: ‘attention economics’; automated advertising systems; external manipulators; company spending priorities; stakeholder knowledge deficits; and flaws in platforms’ policies and in their implementation. How platforms understand and identify harms is insufficiently mapped to human rights standards, and there is a gap in how generic policy elements should deal with local cases, different rights and business models when there are tensions. Enforcement by platforms of their own terms of service to date has grave shortfalls, while attempts to improve outcomes by automating moderation have their limitations. Inequalities in policy and practice abound in relation to different categories of people, countries and languages, while technology advances are raising even more challenges. Problems of ‘solo-regulation’ by individual platforms in content curation and moderation are paralleled by harms associated with unilateral state regulation. Many countries have laws governing content online, but their vagueness fuels arbitrary measures by both authorities and platforms. Hybrid regulatory arrangements can help by elaborating transparency requirements, and setting standards for mandatory human rights impact assessments." (Key messages)
"In 2023, Advancing Rights in Southern Africa (ARISA) through its consortium partner, Internews, undertook the most comprehensive review yet of laws affecting media practice and the freedom of expression, including cyber laws, penal codes, constitutions and acts of parliament, in the sixteen Southern African Development Community (SADC) countries. The Information Ecosystem Analysis (IEA) provides an in-depth overview of the legal provisions that have been enacted or are in various stages of becoming laws in the region, and are being used by SADC governments to stifle and limit press freedom and public debate. Each of the sixteen SADC countries is included as an individual country chapter in this report, providing country-specific legal analyses of the relevant cybersecurity and related laws used by the respective country’s governments to stifle freedom of expression. The approach used by the researchers considered the legislative environment together with literature on the relevant topics, court cases and media reports about the application of specific laws, and focused on incidents where laws were used, dating from 2020 to the present. The respective country analyses have been informed by extensive virtual interviews conducted with journalists, civil society representatives and academics in the region. Attention was also given to countries holding elections in 2023 and 2024." (Executive summary)
"This article analyses the Brazilian PL 2630, so-called “fake news bill,” according to platform regulation approaches focused on speech, data, and market power. This law project was introduced in 2020 with the objective to fight disinformation campaigns in digital platforms such as social media
and messaging services. After a multistakeholder debate, the latest version of the bill before the 2022 general elections was presented in the Chamber of Deputies. This article argues that the bill takes different stances with regard to those three basic elements. The bill strongly draws on the dimension of speech, establishing requirements for transparency in content moderation following the highest international standards. On data and market power, however, the bill makes no significant progress, with little contribution, for example, to tackling the surveillance-based business model. This way, it does not touch on structural conditions that shape disinformation campaigns, such as the profit motive of digital platforms. It follows a general pattern of platform regulation, leaving structural features untouched and, this way, eventually undermining stronger efforts against online disinformation." (Abstract)
"The spread of disinformation in recent years has caused the international community concerns, particularly around its impact on electoral and public health outcomes. When one considers how disinformation can be contained, one often looks to new laws imposing more accountability on prominent social
media platforms. While this narrative may be consistent with the fact that the problem of disinformation is exacerbated on social media platforms, it obscures the fact that individual users hold more power than is acknowledged and that shaping user norms should be accorded high priority in the fight against disinformation. In this article, I examine selected legislation implemented to regulate the spread of disinformation online. I also scrutinise two selected social media platforms – Twitter and Facebook – to anchor my discussion. In doing so, I consider what these platforms have done to self and co-regulate. Thereafter, I consider the limitations on regulation posed by certain behavioural norms of users. I argue that shaping user norms lie at the heart of the regulatory approaches discussed and is pivotal to regulating disinformation effectively." (Abstract)
"The affordances of social media potentially amplify the effects of disinformation by offering the possibility to present deceptive content and sources in credible and native ways. We investigate the effects of two aspects related to the dissemination and modality of digital disinformation: (In)authentic references to the ordinary people as sources of disinformation and the multimodal embedding of deceptive content. Using a pre-registered experiment in the United States and India (N = 1008), we found that adding decontextualized visuals to disinformation on climate change did not amplify its effects on credibility or user engagement. Mimicking ordinary citizen cues has a stronger effect than using an alternative hyper-partisan media source to communicate disinformation under certain conditions. Low levels of media trust and preferences for information from the vox populi moderate the effects of citizen-initiated disinformation, suggesting that disenchanted citizens who oppose established information may be most vulnerable to disinformation attacks from social bots or trolls." (Abstract)
"The proliferation of misinformation, disinformation, and mal-information (MDM) poses serious challenges to democracy, public safety, and national security. Conversely, these very worries could be used as a front for unjustified ends. There is a global trend toward legislation that may risk infringing on press freedoms, civil liberties, and the very democratic and liberal values that protect independent media and safeguard free expression." (Conclusion, page 17)
"Media freedom has deteriorated across the world over the past 15 years with populist leaders attacking journalism in both democratic and repressive states. Since the rise of online misinformation and disinformation, concern is growing that governments are using fake news language and related laws to muzzle the press. Studies find labelling reporters and their stories as fake news can threaten journalistic norms and practices and have implications for trust relationships with sources and audiences. Less understood are the effects of fake news laws on journalism. This article addresses this gap and examines consequences for journalistic practices in Singapore and Indonesia when journalists and sources are targets of fake news laws. Through 20 in-depth expert interviews with journalists, editors, their sources and fake news experts in Indonesia and Singapore, the article identifies “chill effects” on reporting when faced with the threat of new legal sanctions. However, it also identifies adaptations to newsroom practices to manage this threat. We conclude with lessons learned from the Asia Pacific on how journalists in other jurisdictions might manage the potential chilling effects on news reporting when fake news laws are in place." (Abstract)
"In many countries, censorship, blocking of internet access and internet content for political purposes are still part of everyday life. Will filtering, blocking, and hacking replace scissors and black ink? This book argues that only a broader understanding of censorship can effectively protect freedom of expression. For centuries, church and state controlled the content available to the public through political, moral and religious censorship. As technology evolved, the legal and political tools were refined, but the classic censorship system continued until the end of the 20th century. However, the myth of total freedom of communication and a law-free space that had been expected with the advent of the internet was soon challenged. The new rulers of the digital world, tech companies, emerged and gained enormous power over free speech and content management. All this happened alongside cautious regulation attempts on the part of various states, either by granting platforms near-total immunity (US) or by setting up new rules that were not fully developed (EU). China has established the Great Firewall and the Golden Shield as a third way. In the book, particular attention is paid to developments since the 2010s, when internet-related problems began to multiply. The state's solutions have mostly pointed in one direction: towards greater control of platforms and the content they host. Similarities can be found in the US debates, the Chinese and Russian positions on internet sovereignty, and the new European digital regulations (DSA-DMA). The book addresses them all." (Publisher description)
"Since its Joint Communication on Hybrid Threats, the EU has publicly recognized the risks to its security posed by non-traditional means aimed at undermining its legitimacy. The propagation of disinformation including misleading political advertising serves as a key example of how the Commission’s perception of the EU’s vulnerability to hybrid threats in times of geopolitical instability is shaping its regulatory policies. This article uses the framework of regulatory mercantilism, which argues that in conditions of perceived vulnerability, a state-like actor will reassert regulatory control based on a security logic in areas previously characterized by self-regulatory regimes. This article considers the Commission’s 2019–2024 priorities, and how the spheres of technology, security, and democracy policies are intersecting as a response to hybrid threats. As a result, online platform governance in the EU is being substantially restructured with a move from systems of self-regulation to co-regulation backed by sanction as a means of combating hybrid threats online. The Commission’s “taking back control” from platforms in the context of a digital sovereignty agenda serves as an example of regulatory mercantilism in digital policy, which sees the Commission seek to promote regulatory strength in response to perceived vulnerability." (Abstract)
"This study discusses how and to what extent peace operations are affected by digital disinformation and how international organisations (UN, EU, OSCE and NATO) as mandating bodies for peace operations have responded to limit the effect of disinformation or even prevent it. Based on this assessment
of the current situation, the study identifies areas in need of action and suggests options for peace operations. These focus on four areas [Situational awareness; Response; Resilience; Cooperation] and include both short- and long-term measures." (Introduction)
"The study specifically focuses on five types of harmful content: a) hate speech and hate narratives; b) denials of war crimes and glorification of war criminals; c) ethno-nationally and/or politically biased media reporting; d) disinformation; and e) attacks, threats and smear campaigns against individuals. After giving overviews of the five types of harmful content, their targets and consequences, the following chapters are dedicated to the legislative, regulatory and self-regulatory frameworks for the five types of harmful content, how effectively they are used online, what the major obstacles are in their implementation and to what extent they are aligned with international standards. The study also addresses the practices of the courts, the Communications Regulatory Agency of Bosnia and Herzegovina, the Press Council of Bosnia and Herzegovina and other relevant actors in countering harmful content. The final parts of the study are dedicated to the community guidelines of social networks and examples of frameworks in other countries. The scope of harmful content online in Bosnia and Herzegovina is worrying and calls for a comprehensive response. The study emphasizes the need to safeguard freedom of expression and to find responses and practices that are aligned with international human rights law and that do not chill or censor online speech or discourage the flow of diverse sources of information and opinions." (Executive summary)
"This publication describes the many ways in which public authorities and private enterprises empower users against disinformation online. The first chapter sets the scene by discussing relevant concepts, such as mis-, dis- and malinformation, empowerment, and media literacy. It further discusses the way in which disinformation affects users, why it has become such an issue, and how to measure it. Chapter 2 presents the international and EU legal and policy framework, with special emphasis on the different measures introduced by the European Union to fight disinformation. Chapter 3 covers responses at national level, highlights some examples of legislative and non-legislative responses to online disinformation in Europe, and shows how states are placing user empowerment at the centre of their approach to the issue. Chapter 4 focuses on self- and co-regulation, providing an overview of the Strengthened EU Code of Practice on Disinformation, delving into the role of national regulatory authorities, and looking at the practical implementation of measures by Big Tech platforms. Chapter 5 presents relevant judgments of both the Court of Justice of the European Union and the European Court of Human Rights in which they had to rule on cases that are connected, directly or indirectly, to the issue of disinformation. Wrapping up the publication, Chapter 6 presents stakeholders’ reactions to the 2022 Code and recent developments at EU level." (Foreword)
"In light of the role played by state-aligned actors, the private sector and lawmakers in countries with strong democratic institutions should adopt policies that mitigate the ability of state actors to manipulate AI and weaponize communication platforms. Efforts to combat disinformation must recognize that a range of private companies beyond just tech firms are implicated in information manipulation and must put safeguards in place. For example, registration and financing limits on paid PR firms, domestic and foreign, and better oversight by tech platforms on how their platforms are used by state actors are essential. Furthermore, greater transparency about all types of advertising and paid content promotion is needed, not just about political advertising in a handful of Western countries. This could be implemented through existing election laws and paid advertising regulations [...] Any meaningful efforts to combat disinformation will need to address the politicization of social media manipulation and influence operations, and their integration into electoral politics. Lawmakers should implement restrictions on the use of moderation mercenaries, black PR firms, and social media manipulation by those entrusted with public office. Countries should not only require greater transparency for the platforms themselves, but should also practice what they preach by adopting transparency requirements for state and government entities related to advertising and outreach on social media and messaging platforms. Tech platforms must reduce the profitability of intentional and opportunistic disinformation efforts, including by reducing the prevalence and ease of plagiarism or the “recycling” of news content for clickbait. Reducing the economic incentives for clickbait, "churnalism", and regurgitated journalistic content would help deter the profit-driven non-ideological actors in these disinformation networks." (Conclusions and recommendations, pages 24-25)
"Based on the protection gaps identified, the report develops possible countermeasures and describes the conditions required for them to be effective. The central question is: what risk potential does disinformation hold for individual and societal interests, and which governance measures can respond to it adequately? This guiding question is answered in three steps: first (Chapter 2), the forms of disinformation distinguished in academic and media-policy debates, together with their respective conceptual definitions, are summarised and examined for their risk potential. The aim is to show the range of phenomena concerned and to differentiate them from other phenomena and terms. The demarcation criteria are also assessed with regard to their usability as legal or regulatory points of reference. In addition, the state of research on the harmful effects of disinformation on individual and societal protection goals is briefly reviewed; knowledge about the effects of disinformation on individual recipients remains patchy, which stands in some contrast to the rather implicit assumptions underlying current calls for regulation. Where empirical evidence is available, the report sets out the presumed effects and their risk potential. In the second step (Chapter 3), the current legal framework is examined to establish which statutory safeguards against the realisation of these risks already exist and which sub-statutory initiatives at the level of co- and self-regulation have developed that can act as a countervailing force. Here the study continues the work of the GVK report by Möller, Hameleers and Ferreau, identifying existing risk-specific protection gaps in light of the risk potentials established. Where protection gaps become apparent, the report outlines the possibilities for and limits of state action. In the third step (Chapter 4), regulatory approaches and instruments capable of closing the identified protection gaps are examined. Classical approaches to media regulation are usually of limited use here, since in the sphere of public communication the principle applies that it cannot and must not be the state's task to rule on the classifications true/untrue or desirable/undesirable opinion. Here, insofar as action is warranted at all, paths of procedural governance at arm's length from the state must be taken, or alternative forms of content- and technology-related governance developed. Alternatively or additionally, alongside measures that enable or support discourse, countermeasures that can strengthen or consolidate the integrity of information may also be considered." (Pages 4-5)
"In the present report, the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression examines the threats posed by disinformation to human rights, democratic institutions and development processes. While acknowledging the complexities and challenges posed
by disinformation in the digital age, the Special Rapporteur finds that the responses by States and companies have been problematic, inadequate and detrimental to human rights. She calls for multidimensional and multi-stakeholder responses that are well grounded in the international human rights framework and urges companies to review their business model and States to recalibrate their responses to disinformation, enhancing the role of free, independent and diverse media, investing in media and digital literacy, empowering individuals and rebuilding public trust." (Summary)
"This article examines the ongoing dynamics in the regulation of disinformation in Europe, focusing on the intersection between the right to freedom of expression and the right to privacy. Importantly, there has been a recent wave of regulatory measures and other forms of pressure on online platforms to tackle disinformation in Europe. These measures play out in different ways at the intersection of the right to freedom of expression and the right to privacy. Crucially, as governments, journalists, and researchers seek greater transparency and access to information from online platforms to evaluate their impact on the health of their democracies, these measures raise acute issues related to user privacy. Indeed, platforms that once refused to cooperate with governments in identifying users allegedly responsible for disseminating illegal or harmful content are now expanding cooperation. However, while platforms are increasingly facilitating government access to user data, platforms are also invoking data protection law concerns as a shield in response to recent efforts at increased platform transparency. At the same time, data protection law provides for one of the main systemic regulatory safeguards in Europe. It protects user autonomy concerning data-driven campaigns, requiring transparency for internet audiences about targeting and data subject rights in relation to audience platforms, such as social media companies." (Abstract)