A look at how people discuss false content online and how exploring social media discourses can help strengthen policy responses.

In February 2024, the European Union’s ‘Digital Services Act’ (DSA) will come fully into effect. The DSA will enforce a standard of transparency on very large social media platforms, obliging them to lay out how their sophisticated, proprietary content recommendation algorithms work. The act is a response to years of algorithmically fuelled disinformation that has undermined public trust and led to real-world harms (Jolley & Paterson, 2020; Wardle & Singerman, 2021).

Algorithms and the spread of disinformation are inextricably linked. Algorithmic recommender systems that suggest new content to users may serve as a vector between disinformation producers and social media users, delivering false and harmful content. Understanding these systems, their effects, and public perceptions of algorithms is vital to forming legislation that responds to such threats.

Public perceptions of disinformation  

My research uses corpus linguistic approaches to study the replication and reception of disinformation on social media. I focus on how, linguistically, people share false content online, and on how ideas spread across the internet from their inception until they cease to exist. This involves exploring metacommentary around disinformation, or, more simply, looking at how people talk about disinformation itself.

Understanding how the public talk about important topics is a tried-and-tested method for understanding those topics with greater nuance, whether it is discourses of Islam (Baker et al., 2013), discussions of vaccination (Coltman-Patel et al., 2022), or hate speech online (Hardaker & McGlashan, 2016). Disinformation poses a security threat by clouding decision-making at both individual and national levels. Understanding how the public perceive disinformation is therefore crucial to mitigating its effects.

Algorithmic disinformation is a complex issue that requires an equally complex solution, combining regulation, policy, education, and fact-checking. But what if the public do not always see it as a concern? In an analysis of almost 40,000 tweets* from the first six months of 2022, each containing the words disinformation, misinformation, or fake news, the word ‘algorithm’ appears in just 24 tweets. To put this in perspective, the words ‘dog’ and ‘dogs’, which have nothing to do with the topic at hand, appear in 31 tweets. That is to say, people discuss dogs more often than they do algorithms in relation to disinformation, misinformation, and fake news in the dataset.
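For readers curious about how this kind of frequency check works in practice, the sketch below illustrates the general idea in Python. It is a minimal illustration rather than the actual analysis pipeline used in the study: the file name tweets.csv, the text column, and the decision to count whole-word matches (including plurals) are all assumptions made for the example.

```python
import csv
import re

# Hypothetical input: one tweet per row, with the tweet text in a "text" column.
CORPUS_PATH = "tweets.csv"

# Case-insensitive whole-word patterns. Whether to include plural or other
# inflected forms is an analytical choice in real corpus work; here the
# optional "s?" covers simple plurals only.
QUERIES = {
    "algorithm(s)": re.compile(r"\balgorithms?\b", re.IGNORECASE),
    "dog(s)": re.compile(r"\bdogs?\b", re.IGNORECASE),
}


def count_tweets_containing(path: str) -> dict:
    """Count how many tweets contain at least one match for each query."""
    counts = {label: 0 for label in QUERIES}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = row.get("text", "")
            for label, pattern in QUERIES.items():
                if pattern.search(text):
                    counts[label] += 1
    return counts


if __name__ == "__main__":
    for label, n in count_tweets_containing(CORPUS_PATH).items():
        print(f"{label}: found in {n} tweets")
```

In practice, corpus tools go further than raw counts: concordancers return the surrounding context of every match, which is what allows the metacommentary around these terms to be read and categorised.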

...people discuss dogs more often than they do algorithms in relation to disinformation, misinformation, and fake news... 

This has implications for how we tackle algorithmic disinformation online: if public awareness of algorithms is low, policy responses such as the DSA may be seen as disproportionate to the public’s perception of the issue. Algorithms are fundamental to social media and to the spread of disinformation online, and while a lack of explicit mention does not imply a complete lack of knowledge, there does seem to be an awareness gap. The dataset offers only a snapshot of these discussions, but given the extensive policy responses to disinformation, it is vital to learn from findings such as these.

When the public do discuss disinformation, they are keenly aware of its dangers. Online discussions highlight the threat disinformation poses to democracy, the ways it infringes on human rights, and its disproportionate impact on issues such as reproductive healthcare. Throughout, disinformation is framed as an enemy, something to be fought and combated. There is, however, a paradox here. Research has shown that simply discussing disinformation and its negative effects can erode trust and heighten cynicism (Jones-Jang et al., 2020; Vaccari & Chadwick, 2020). Therefore, when addressing disinformation, we need to be aware that overexposure to the topic can do more harm than good.

...when addressing disinformation, we need to be aware that overexposure to the topic can do more harm than good. 

Informing Policy Responses  

The public are aware of disinformation’s potential to threaten civil liberties and damage our institutions, but they are not necessarily familiar with the nuances of how it spreads through technologies such as recommendation algorithms. Responses to disinformation should prioritise the human aspect, treating the technical and social dimensions of the problem not as separate but as interconnected. Examining people’s real-world concerns in natural settings helps us grasp what troubles them and how changes to our online information environments can address the genuine worries surrounding the spread of disinformation.

Further, policy responses to security threats must be grounded in real-world situations if they are to be effective. Policies that address the public’s genuine concerns are more likely to garner support and foster positive change, helping to reduce the impact of disinformation. This includes addressing health threats, such as disinformation that rejects conventional medicine, and responding to information operations that use disinformation to undermine democracy. The individuals most at risk from disinformation are the public themselves, and it is their concerns that should guide our response.


William Dance is a PhD student and Senior Research Associate in the ESRC Centre for Corpus Approaches to Social Science at Lancaster University. His research combines historical approaches to studying language with the analysis of contemporary social media datasets to explore the development of disinformation over centuries. 

*Twitter is now called X, and tweets are now called posts. 

Read more

Baker, P., Gabrielatos, C., & McEnery, T. (2013) Discourse Analysis and Media Attitudes: The Representation of Islam in the British Press. Cambridge: Cambridge University Press.  https://doi.org/10.1017/CBO9780511920103 

Coltman-Patel, T., Dance, W., Demjén, Z., Gatherer, D., Hardaker, C., & Semino, E. (2022) ‘Am I being unreasonable to vaccinate my kids against my ex’s wishes?’ – A corpus linguistic exploration of conflict in vaccination discussions on Mumsnet Talk’s AIBU forum. Discourse, Context & Media, 48. https://doi.org/10.1016/j.dcm.2022.100624   

Hardaker, C. & McGlashan, M. (2016) “Real men don’t hate women”: Twitter rape threats and group identity. Journal of Pragmatics, 91, 80-93. https://doi.org/10.1016/j.pragma.2015.11.005  

Jones-Jang, S. M., Kim, D. H., & Kenski, K. (2020) Perceptions of mis- or disinformation exposure predict political cynicism: Evidence from a two-wave survey during the 2018 US midterm elections. New Media & Society, 23(10), 3105-3125. https://doi.org/10.1177/1461444820943878   

Vaccari, C. & Chadwick, A. (2020) Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408