"This study explores the various methods of combating fake news on social media such as Natural Language Processing, Hybrid model. We surmised that detecting fake news is a challenging and complex issue, however, it remains a workable task. Revelation in this study holds that the application of hybr
...
id-machine learning techniques and the collective effort of humans could stand a higher chance of fighting misinformation on social media." (Abstract)
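The abstract does not publish any code; as a rough, hypothetical illustration of what a "hybrid" NLP-plus-metadata detector could look like, the sketch below combines TF-IDF text features with simple post-level signals in scikit-learn. The column names, toy data and thresholds are invented for illustration and are not drawn from the study.

```python
# Hypothetical sketch of a hybrid fake-news classifier: TF-IDF text features
# plus simple post metadata, fed into a logistic-regression head.
# Not the study's implementation; data and column names are made up.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Toy dataset: post text, two engagement/account signals, and a 0/1 label.
df = pd.DataFrame({
    "text": [
        "Scientists confirm miracle cure suppressed by doctors",
        "City council approves new budget for road repairs",
        "Shocking secret the government does not want you to see",
        "Local library extends weekend opening hours",
    ],
    "shares_per_hour": [950, 12, 1200, 8],
    "account_age_days": [3, 2100, 10, 1500],
    "label": [1, 0, 1, 0],  # 1 = fake, 0 = genuine
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(ngram_range=(1, 2)), "text"),
    ("meta", "passthrough", ["shares_per_hour", "account_age_days"]),
])

model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    df[["text", "shares_per_hour", "account_age_days"]], df["label"],
    test_size=0.5, random_state=0, stratify=df["label"])

model.fit(X_train, y_train)
print(model.predict(X_test))
```

In such a setup, the "collective effort of humans" the abstract refers to would enter as human fact-check verdicts supplying the training labels and as manual review of low-confidence predictions.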
"Auf der Grundlage der identifizierten Schutzlücken erarbeitet das Gutachten mögliche Gegenmaßnahmen und beschreibt die nötigen Wirkungsvoraussetzungen. Die zentrale Frage lautet: Welche Risikopotenziale für individuelle und gesellschaftliche Interessen weist Desinformation auf und welche Gover
...
nance-Maßnahmen können darauf adäquat reagieren? Die Beantwortung dieser Leitfrage erfolgt dabei in drei Schritten: Vorangestellt (Kap. 2) werden die in wissenschaftlichen und medienpolitischen Diskussionen differenzierten Erscheinungsformen von Desinformation sowie ihre jeweiligen Begriffsverständnisse zusammengefasst und auf ihre Risikopotenziale hin untersucht. Ziel ist es, die Spannweite betroffener Phänomene aufzuzeigen und sie von anderen Erscheinungsformen und Begrifflichkeiten zu differenzieren. Dabei erfolgt auch eine Bewertung der Abgrenzungsindikatoren im Hinblick auf die Nutzbarkeit für rechtliche bzw. regulatorische Anknüpfungspunkte. Zudem wird hier kurz der Stand der Forschung hinsichtlich der abträglichen Effekte von Desinformation für individuelle und gesellschaftsbezogene Schutzziele einbezogen; Kenntnisse über Wirkungen von Desinformation auf einzelne Rezipientinnen und Rezipienten liegen hier bislang nur lückenhaft vor. Dies steht in gewissem Kontrast zu den eher impliziten Unterstellungen, die den aktuellen Regulierungsforderungen zugrunde liegen. Dort, wo empirische Evidenzen vorliegen, zeigt das Gutachten jedenfalls vermutete Effekte und ihre Risikopotenziale auf. Im zweiten Schritt (Kap. 3) wird der geltende Rechtsrahmen daraufhin untersucht, welche gesetzlichen Vorkehrungen gegen eine Risikorealisierung bereits bestehen und welche untergesetzlichen Initiativen sich auf Ebene von Ko- und Selbstregulierung entwickelt haben, die als Gegenkraft wirken können. An dieser Stelle setzt die Untersuchung die Arbeit des GVK-Gutachtens von Möller, Hameleers und Ferreau fort,5 indem bestehende risikospezifische Schutzlücken mit Blick auf die identifizierten Risikopotenziale herausgearbeitet werden. Dort, wo Schutzlücken erkennbar werden, zeigt das Gutachten staatliche Handlungsmöglichkeiten und -grenzen auf. Im dritten Schritt (Kap. 4) werden regulatorische Ansatzpunkte und -instrumente, die in der Lage sind, die identifizierten Schutzlücken zu schließen, beleuchtet. Klassische Ansätze der Medienregulierung eignen sich hier meist begrenzt, da für den Bereich der öffentlichen Kommunikation der Grundsatz gilt, dass es nicht staatliche Aufgabe sein kann und darf, über die Einstufungen wahr/unwahr oder erwünschte Meinung/unerwünschte Meinung zu befinden. Hier müssen – soweit überhaupt Handeln angezeigt ist – Wege staatsferner, prozeduraler Steuerung betreten6 oder alternative Formen von inhalts- und technikbezogener Governance entwickelt werden. Alternativ oder ergänzend kommen neben Maßnahmen, die diskursermöglichend oder -unterstützend wirken, auch Gegenmaßnahmen in Betracht, die informationsintegritätssteigernde oder -integrierende Wirkungen haben können." (Seite 4-5)
"In the present report, the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression examines the threats posed by disinformation to human rights, democratic institutions and development processes. While acknowledging the complexities and challenges posed
...
by disinformation in the digital age, the Special Rapporteur finds that the responses by States and companies have been problematic, inadequate and detrimental to human rights. She calls for multidimensional and multi-stakeholder responses that are well grounded in the international human rights framework and urges companies to review their business model and States to recalibrate their responses to disinformation, enhancing the role of free, independent and diverse media, investing in media and digital literacy, empowering individuals and rebuilding public trust." (Summary)
"First, disinformation is enabled by the Philippines' history of colonialism and martial law, high social media usage and low digital literacy, compounding crises, strongman governance, and sexist and misogynist rhetoric from elected leaders. While the first three factors create conditions that gene
...
rally enable disinformation, the final two factors directly contribute to the prevalence of gendered disinformation. Second, disinformation is used as a tool to confuse, distract, revise, and discredit, with the aim of suppressing dissent. More critically, analysis of Twitter data indicates that disinformation used to discredit relies on the policing of gender to undermine political opposition. Third, thsi policing of gender results in the weaponization of gendered relationships, which encourages narratives that reinforce gender inequalities. Finally, illiberal actors benefit from an environment marked by gender inequality, as such conditions support hegemonic masculine norms, which in turn consolidate authoritarian power. As a result, President Duterte and his supporters benefit from disinformation that encourages gender inequality and pursue disinformation as a tactic for weakening democratic governance in the Philippines." (Executive summary)
"Based on key term searches and forward and backward citation mapping, we constructed a review of 223 studies published since 1972 related to countermeasures designed to combat influence operations. Each identified study included: (1) a source of variation in exposure to countermeasures; (2) a clear
...
ly defined outcome of interest for some specified population; (3) relevance to thinking about the potential of an intervention to impact real-world behavior; and (4) enough detail to evaluate the credibility of the findings. This approach amounts to sampling the foundational research surrounding countermeasures and thus incorporates the collective judgement of this emerging field. All of the studies we identified examined user-focused countermeasures, i.e., those aimed at the consumers of disinformation. None looked at countermeasures aimed at impacting the influence operations directly. There exists a mismatch between the major interventions taken by platforms - algorithmic downranking, content moderation, redirection, and deplatforming accounts - and those studied by the research community. Most papers we reviewed focus on one particular method for countering information operations: fact-checking and its many offshoots. The types of interventions employed by social media companies on actual users are understudied. We recommend further research on four key areas: (1) measuring the impact of the most common interventions by social media platforms, (2) assessing the impact of countermeasures on real-world behaviors (both online and offline), (3) evaluating the efficacy of countermeasures in non-Western contexts, and (4) studying countermeasures that target the creators of disinformation content in addition to studying consumer-facing policies." (Essay summary)
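The essay describes its corpus construction only in prose; as an illustrative sketch (not the authors' tooling), forward and backward citation mapping can be expressed as a bounded expansion over a citation graph starting from seed studies found by key-term search. The two lookup functions below stand in for whatever bibliographic database would actually be queried and are hypothetical.

```python
# Illustrative sketch of forward/backward citation mapping ("snowballing").
# The two lookup functions are placeholders for a real bibliographic API.
from collections import deque

def get_references(paper_id):
    """Backward step: papers cited BY paper_id (placeholder data)."""
    graph = {"seed-1": ["a", "b"], "a": ["c"]}
    return graph.get(paper_id, [])

def get_citing_papers(paper_id):
    """Forward step: papers that CITE paper_id (placeholder data)."""
    graph = {"seed-1": ["d"], "a": ["e"]}
    return graph.get(paper_id, [])

def snowball(seeds, max_rounds=2):
    """Expand a seed set by following citations in both directions."""
    found = set(seeds)
    frontier = deque(seeds)
    for _ in range(max_rounds):
        next_frontier = deque()
        while frontier:
            paper = frontier.popleft()
            for neighbour in get_references(paper) + get_citing_papers(paper):
                if neighbour not in found:
                    found.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return found

# Seeds would come from key-term searches; every candidate found this way
# would still be screened against the four inclusion criteria quoted above.
print(snowball(["seed-1"]))
```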
"Disinformation is proliferating on the internet, and platforms are responding by attaching warnings to content. There is little evidence, however, that these warnings help users identify or avoid disinformation. In this work, we adapt methods and results from the information security warning litera
...
ture in order to design and evaluate effective disinformation warnings. In an initial laboratory study, we used a simulated search task to examine contextual and interstitial disinformation warning designs. We found that users routinely ignore contextual warnings, but users notice interstitial warnings—and respond by seeking information from alternative sources. We then conducted a follow-on crowdworker study with eight interstitial warning designs. We confirmed a significant impact on user information-seeking behavior, and we found that a warning’s design could effectively inform users or convey a risk of harm. We also found, however, that neither user comprehension nor fear of harm moderated behavioral effects. Our work provides evidence that disinformation warnings can—when designed well—help users identify and avoid disinformation. We show a path forward for designing effective warnings, and we contribute repeatable methods for evaluating behavioral effects. We also surface a possible dilemma: disinformation warnings might be able to inform users and guide behavior, but the behavioral effects might result from user experience friction, not informed decision making." (Abstract)
"[...] this Research Report has selected four country case studies: Sweden, Canada, the United Kingdom, and France. Obviously, other cases would have been interesting, particularly the United States. But the United States is already at the centre of other works, including by Hybrid CoE. Being divers
...
e in terms of power, geopolitical situation, and systems of government, the four selected countries offer a good sample of what liberal democracies, different in colour, shape and size, can propose to counter disinformation. Finally, this Research Report will attempt to draw some general lessons from these four cases, on what an effective state response to disinformation should involve." (Page 9)
"Regarding media and information literacy, the Kosovo Government and relevant education institutions, such as the Ministry of Education, should urgently introduce subjects that will be taught in school to provide a better understanding of the media and information literacy. Kosovo’s educational in
...
stitutions should increase the teaching of critical thinking and the online sphere to improve inflammatory language and inappropriate ethnic slurs in the online space. Self-regulation bodies should hold discussions with their members and urge them to take action in the comments sections of their online media, social media and networks when it appears. Media organizations should increase their fact-checking mechanisms / newsrooms and remind journalists of the Code of Ethics more often." (Policy reommendations)
"In Moldova, a series of hackathons led to the development of tech-based solutions to misinformation. In Ecuador, indigenous groups wrote their own stories on Wikipedia to strengthen their culture's representation and publicly correct misinformation. In Uganda, citizen journalists established a netw
...
ork to report on underrepresented issues and groups. And in the Middle East, innovative concepts in journalism training are helping the next generation of journalists to become fit for the challenges of the future. These four case studies illustrate the approaches that DW Akademie and its partners are pursuing worldwide to strengthen the public dialogue. The goal is to foster innovation and increase the visibility of underrepresented topics, and to bring together innovators and experts to pool their knowledge and skills." (Publisher description)
"Civil Society Organisations (CSOs) bring a wide range of skill sets to the problem of digital disinformation. Some organizations focus on digital media literacy and education; others engage in advocacy and policy work. Another segment has developed expertise in fact-checking and verification. Other
...
organizations have developed refined technical skills for extracting and analyzing data from social media platforms. This research yielded several clear observations about the state of CSO responses to disinformation and, in turn, suggests several recommendations for paths forward. • Prioritize Skill Diffusion and Knowledge Transfer. Civil society organizations seeking funding for counter-disinformation initiatives should emphasize the importance of skill diffusion and knowledge-transfer initiatives. The siloed nature of disinformation research points to a growing need to blend technical expertise with deep cultural and political knowledge. • CSO researchers lack sufficient access to social media data. Survey respondents identified insufficient access to data as a challenge. Sometimes data are not made available to CSOs; in other instances, data are made available in formats that are not workable for meaningful research purposes. Unequal access to the data that private companies do provide can exacerbate regional inequities, and the nature of data sharing by social media platforms can unduly shape the space for inquiry by civil society and other researchers. Funders, platforms, and other key actors should develop approaches that provide more consistent, inclusive data access to CSOs. • Duplicative programming hampers innovation. CSOs drawing on similar tools, approaches, and techniques to meet similar goals pointed to three main factors preventing more specialized, innovative initiatives: lack of coordination, lack of specific expertise, and lack of flexible funding. Community building and collaboration among relevant organizations deserve more investment, as do initiatives that partner larger, established organizations with smaller or growing ones, or pool efforts, skill sets, and expertise to encourage diverse research by design rather than by coincidence. • Relationships with tech platforms vary across regions. Surveyed CSOs often held simultaneously skeptical and positive opinions about their relationships with social media companies. Some receive preferential access to data and even funding for their work (raising concerns about independence), while others report a lack of responsiveness from company representatives. In the Global South and Eastern Europe, many CSOs expressed concern that platforms failed to meaningfully engage with them on issues of critical concern. • More flexible funding and more diverse research are both necessary. To encourage greater platform accountability across varied geographic contexts, CSOs and their funders should draw on the perspectives of specific, under-analyzed communities." (Executive summary, page 3-4)
"Disinformation undermines human rights and many elements of good quality democracy; but counter-disinformation measures can also have a prejudicial impact on human rights and democracy. COVID-19 compounds both these dynamics and has unleashed more intense waves of disinformation, allied to human ri
...
ghts and democracy setbacks. Effective responses to disinformation are needed at multiple levels, including formal laws and regulations, corporate measures and civil society action. While the EU has begun to tackle disinformation in its external actions, it has scope to place greater stress on the human rights dimension of this challenge. In doing so, the EU can draw upon best practice examples from around the world that tackle disinformation through a human rights lens. This study proposes steps the EU can take to build counter-disinformation more seamlessly into its global human rights and democracy policies." (Abstract)
"As of July 2021, Telegram had 550 million active users worldwide – more than the individual user bases of Twitter, Snapchat or Discord. It is the fifth most-popular messaging app after Facebook-owned Whatsapp and Messenger, and WeChat and QQ which dominate the Chinese market [...] For this paper,
...
I looked at Telegram’s policies and functionalities to help understand what made it so attractive to misinformation actors both in the Ukraine, which has a long history of Telegram engagement, and Brazil, Spain and Germany where it has had more of an impact in recent years. According to the journalists and digital researchers I interviewed about investigating misinformation and disinformation on Telegram, there are ways to address the issue, both on and off the platform: by investigating movements and their political or financial interest, by producing more responsible journalism, through clearer communication from governments, and through the continued moderation efforts on other social media platforms." (Pages 7-8)
"This publication is a collection of a variety of outlooks, recommendations, and input from the participants of the 2020 workshop [for fellows of the CrossCulture Programme (CCP) of the Institut für Auslandsbeziehungen] and others. On the subject of digital access, CCP alumnus Camilo Olea speaks ab
...
out the digital divide in Mexico and how his organisation is providing access to indigenous rural communities. The German NGO Superrr demands an open digital infrastructure and more open-source software for a more inclusive digital sphere. Ali (name changed), a Bangladeshi journalist and CCP alumnus, gives an overview of the current state of free speech in Bangladesh. CCP alumna Hend Kheiralla from Sudan shares her view on the role of social media during the Sudanese Revolution and the impact of the internet shutdown. Having experienced severe discrimination online herself, a CCP alumna from Jordan talks about her experiences and the impact of attacks as well as strategies for dealing with them. Love Storm, a German NGO that focuses on countering hatred online, suggests specific measures we can start using directly to create a safe and inclusive online space for everyone." (Editorial, page 3)
"The publication is focused on the ways fake news, disinformation, misinformation and hateful statements are spread across society, predominantly within the online environment. Its main ambition is to offer an interdisciplinary body of scholarly knowledge on fake news, disinformation and propaganda
...
in relation to today's journalism, social development, political situation and cultural affairs happening all around the world." (Publisher description)
"Fondation Hirondelle's approach to disinformation centres on the fundamental principles of journalism and on the lessons learned from over 25 years of applying these principles in highly fragile contexts, where access to reliable information for the majority is not a given, and where rumours, hate
...
speech and propaganda undermine peace building and development. Our response to disinformation is based on two complementary axes: sticking to the facts and building trust." (Our approach, page 2)
"The Philippines is one of the first countries where the potential for online disinformation threats to undermine democratic processes, especially during elections, was noticed [...] This report takes a deep look at an online survey that Internews conducted, explores the cultural and emotional dimen
...
sions of disinformation and how they form part of the broader political transformations taking place in the Philippines, examines how the Philippine disinformation ecosystem fits into the regional landscape, looks into financial incentives and legislation, and formulates a set of strategic and programmatic recommendations to better tackle the issue of disinformation in the Philippines." (https://internews.org)
"The five research streams are listed below. For each stream, three top research questions were identified, resulting in a list of 15 top priority research questions for the public health research agenda for infodemic management. Further, we listed for each subcategory a second tier of important res
...
earch questions, totalling 50 questions [...] Research stream 1: Measure and monitor the impact of infodemics during health emergencies [...] Research stream 2: Detect and understand the spread and impact of infodemics [...] Research stream 3: Respond and deploy interventions that protect against the infodemic and mitigate its harmful effects [...] Research stream 4: Evaluate infodemic interventions and strengthen the resilience of individuals and communities to infodemics [...] Research stream 5: Promote the development, adaptation and application of tools for managing infodemics ..." (Annex 1, page 19 ff.)
"This case study examines social networks as the modern intersections of radical discourse and political extremism. But, as this research will show, extremist content in social networks, even that which has telegraphed violent hate crimes, is seldom communicated in textbook forms bigotry or provocat
...
ions of violence. Today, the true challenge for social networks like Facebook and Twitter is addressing hate speech that reads more like fear mongering and identity politics, and thus, does not get flagged by monitors. From accounts dedicated to inciting fear over the “threat of immigrants” or “black crime,” to groups that form around hashtags declaring that a “#whitegenocide” is underway. These narratives represent the more ubiquitous versions of hate culture that permeate these popular spaces and radicalize cultural discourses happening there. This case study explores how such rhetoric has the same capacity to deliver messages of hate, and even incite violence, by investigating six hate crimes from 2019 that were preceded by social media diatribes. The comparative analysis will show how these examples mostly featured nonviolent expressions of cultural paranoia, rather than avowals of violence or traditional hate speech, thus making them harder to detect by programs seeking out such threats in plain sight. The research then examines the user policies of leading social networks to assess whether their guidelines on hateful and violent content are purposed to address the kinds of language that were espoused by these violent extremists. The study considers the strategies being employed by social networks to expose hateful content of all forms, and the need for more prominent counter narratives." (Abstract)
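The abstract's claim that such rhetoric evades "programs seeking out such threats in plain sight" can be made concrete with a deliberately naive keyword filter, sketched below: it flags explicit calls to violence but passes the implicit fear-mongering phrasings the study describes. The blocklist and example posts are hypothetical and do not come from the study.

```python
# Deliberately naive keyword-based flagging, to illustrate why implicit
# fear-mongering rhetoric slips past filters tuned to explicit terms.
# The blocklist and example posts are hypothetical.
EXPLICIT_TERMS = ("kill", "exterminate")  # stand-ins for a real slur/violence lexicon

def flags(post: str) -> bool:
    """Return True if the post contains an explicit blocklisted term."""
    text = post.lower()
    return any(term in text for term in EXPLICIT_TERMS)

posts = [
    "They should all be exterminated!",                                 # explicit: flagged
    "Wake up, the invasion is replacing us. #whitegenocide is real",    # implicit: not flagged
]
for post in posts:
    print(flags(post), "-", post)
```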