Do you know your algorithms from your Gen Z? This A to Z provides examples of how false or misleading information can be spread and ways to combat it.

Algorithms 

Social media algorithms can amplify the spread of misinformation via recommender systems. These algorithms curate recommendations based on a user’s interests, search history and likes, so if the algorithm registers a user engaging with misinformation, it may expose them to more of it.
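
A minimal sketch of this feedback loop, in Python. The scoring function, field names and weights below are illustrative assumptions for this piece, not any platform’s actual algorithm:

```python
# Illustrative, engagement-driven ranking: the weights and field names
# are invented for this sketch and do not reflect any real platform.

def score(post, user_interests):
    """Rank a post higher when it matches the user's interests
    and has already attracted engagement."""
    interest_match = len(post["topics"] & user_interests)
    engagement = post["likes"] + 2 * post["shares"]
    return interest_match * (1 + engagement)

posts = [
    {"id": 1, "topics": {"health"}, "likes": 10, "shares": 1},
    {"id": 2, "topics": {"health", "conspiracy"}, "likes": 500, "shares": 120},
]

feed = sorted(posts, key=lambda p: score(p, {"health"}), reverse=True)
print([p["id"] for p in feed])  # [2, 1]: the high-engagement post ranks
                                # first, regardless of its accuracy
```

The point of the sketch is the design choice itself: because prior engagement feeds back into the score, content that provokes strong reactions is surfaced more often, whether or not it is accurate.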

Bots 

Misinformation can be disseminated by humans and by automated accounts, known as bots. Bots are widespread on platforms such as X (formerly Twitter), where they mimic human users by emulating social interactions: posting excessively, retweeting polarising news content, and referencing influential figures. Not all bots are ‘bad’, but they can be used to amplify misinformation and manipulate political discourse. 
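
Such behavioural signatures are why simple rate-based heuristics are often a first step in spotting bots. The sketch below is a hypothetical illustration with invented thresholds and field names; real detectors (e.g. Botometer) use many more features and machine learning:

```python
# Toy rate-based heuristic for flagging bot-like accounts. The thresholds
# and field names are invented for illustration only.

def looks_bot_like(account):
    """Flag accounts that post at inhumanly high rates or that do
    almost nothing except retweet."""
    posts_per_day = account["posts"] / max(account["days_active"], 1)
    retweet_ratio = account["retweets"] / max(account["posts"], 1)
    return posts_per_day > 100 or retweet_ratio > 0.95

suspect = {"posts": 42_000, "days_active": 90, "retweets": 41_500}
print(looks_bot_like(suspect))  # True: roughly 467 posts per day,
                                # almost all of them retweets
```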

Cognitive bias 

The underlying mechanisms that lead people to accept misinformation. Well-known cognitive biases include familiarity (relying on information that is already familiar), availability (relying on information that comes readily to mind), and confirmation (processing information in a way that supports prior beliefs). These biases can influence an individual’s judgement and decision-making when differentiating between factual information and fallacious claims. 

Disinformation 

False information that is spread intentionally. Disinformation is often used to distort public perception for personal or political gain. Extremist groups often use false information to garner public support, invoking fear and recruiting members through manipulation.  

Exposure 

Exposure and sharing are connected yet distinct concepts. Most people share only a small percentage of the material they are exposed to, so looking at what someone shares gives a restricted view of their information environment. 

Fake News 

Often used to describe false information, though the term is less descriptive and useful than ‘disinformation’ or ‘misinformation’. ‘Fake news’ has also, ironically, been used to politicise and justify misinformation: the label is deployed to denigrate factually correct information produced by opponents. 

Games 

Prebunking describes proactively refuting false information before people fall for it. One way this can be done is through educational games, which teach people to recognise the manipulation techniques that might be used against them. 

Hostile state actors 

Hostile states spread misinformation by investing in efforts to influence public discourse, presumably because they believe they stand to benefit. A well-evidenced example is the activity of Russia’s St Petersburg-based Internet Research Agency. 

Inoculation 

If we conceptualise misinformation as akin to a virus that spreads through society, we can inoculate against it. Psychological inoculations aim to give people the ‘mental antibodies’ to resist persuasion from misinformation. Inoculation works by pre-emptively warning people that they might be manipulated and then giving them the skills to identify misinformation. 

Journalist investigations 

Individuals with journalistic backgrounds do much of the work on fact-checking and debunking. Examples include BBC Reality Check, Bellingcat and PolitiFact.com, operated by the Poynter Institute. 

Knowledge Revision 

Misinformation can continue to shape people’s beliefs even after they have received, and accepted, a correction. This is known as the continued influence effect (CIE). Beyond the laboratory, the CIE has been demonstrated for real-world cases such as claims about weapons of mass destruction in the 2003 Iraq war and the supposed link between vaccines and autism. 

Likes 

A form of engagement on social media where, with the click of a button, a user shows others that they like posted content. Visible like and share counters can significantly influence interaction with low-credibility information: as engagement rises, people become more likely to share questionable content and less likely to fact-check it. 

Misinformation 

The unintentional sharing of inaccurate or misleading information. The rise of social media has brought concerns over misinformation to the fore, with the number of academic and policy-related articles on the topic increasing exponentially over time. As with any messaging, the source, message, context, and receiver are all important considerations when analysing the likely impact of misinformation. 

New media literacy 

New media literacy strategies are a major component of combating misinformation over the long term. These educational interventions aim to improve people’s ability to discern accurate from inaccurate news content on social media, for example by teaching people to identify low-credibility news sources. 

Online Safety Bill 

Final amendments were made to the bill during its third reading in the House of Lords on Wednesday 6 September 2023; it now returns to the House of Commons for further consideration. The Online Safety Bill is, to quote the government (HMG), a ‘new set of laws to protect children and adults online. It will make social media companies more responsible for their users’ safety on their platforms’. It would mandate that search engines and ‘user-to-user’ applications filter illegal content. How effectively the bill will tackle mis- and disinformation is the subject of considerable debate. 

Persuasion 

Persuasion can be used to disseminate misinformation by skilfully and convincingly presenting false or misleading claims. Persuasive techniques that appeal to people’s emotions, biases, and preconceived beliefs make them more susceptible to accepting inaccurate or misleading content. Understanding the principles of persuasion is therefore crucial for recognising and countering misinformation. 

QAnon 

QAnon is an American conspiracy theory and political movement whose first supportive member of Congress was Georgia Republican Marjorie Taylor Greene. In a recent study, Wu and colleagues found that official Republican or Democratic condemnation of Greene decreased positive views of QAnon but not of Greene herself. The authors conclude that their “results suggest that public officials have a unique responsibility to criticize misinformation, but they also highlight the difficulty in shifting attitudes toward politicians who embrace and spread falsehoods” (whether intentional or unintentional). 

Rumours 

Rumours are often the breeding ground for political misinformation and conspiracy theories, leading individuals to believe and share false information. Petersen has drawn attention to the power of ‘hostile political rumours’, which can shape political outcomes by inciting hostility toward a specific politician or political group even when factual evidence for the rumour is scant. 

Spoofing 

Where someone falsely claims to be someone else or falsely adopts a social standing or identity. Information spoofing includes falsifying, suppressing, or amplifying messages and may serve to influence public understanding of events. Spoofing (together with ‘truthing’ and ‘social proofing’) on digital platforms was observed by Innes and colleagues following the 2017 UK terrorist attacks. 

Trust 

Trust in the context of misinformation relates to the complex relationship between social media platforms, policy-makers, and users. Users generally express trust concerns regarding misinformation and data use, censorship, freedom of speech, and the interplay between these issues. 

Users 

Social media users and their behaviour are integral to analysing the spread of misinformation. In one large, well-known longitudinal study, Vosoughi and colleagues found that human users (not bots) were responsible for the dramatic spread of false news online: false news was 70% more likely to be retweeted than true news. The authors suggested that the novelty of falsehoods and the emotional reactions they provoke in recipients may explain why they diffuse online more rapidly and widely than the truth. 

Visual disinformation, memes and deep fakes 

Weikmann and Lecheler have noted that visual disinformation can be classed along two dimensions: (1) audio-visual richness, i.e., whether static or moving pictures are employed, and (2) manipulation sophistication, i.e., whether low-level or high-level creative techniques are used. Memes can be an important vehicle for spreading misinformation and often use humour to evoke emotions such as fear, anger and empathy. More recently, deep fakes built with machine learning have made it difficult to discern true from false audio-visual information. 

Worldviews 

Worldviews influence how misinformation spreads and is received. People are more likely to believe and disseminate misinformation that matches their values and views, and these established worldviews can encourage acceptance of false information while hindering the critical examination of evidence. 

Xenophobia 

Misinformation can promote xenophobic attitudes. Xenophobic misinformation is particularly prominent in false narratives about migration, which claim, falsely, that refugees and migrants are a danger to society. 

YouTube 

As a social media platform with billions of daily views, YouTube has the potential to amplify the spread of misinformation. Several investigations have examined whether the YouTube recommender system creates pathways to misinformation content; however, limitations of these studies, such as restricted access to the algorithm, make it difficult to draw firm conclusions. 

Z: Generation Z (18–24 yrs) 

The Reuters Institute at Oxford University has suggested that social networks have steadily replaced news websites as a primary news source for younger audiences, with Instagram, TikTok, and YouTube becoming increasingly popular for news amongst this group. In 2022, 39% of 18–24s used social media as their main news source, compared with 34% who preferred to go directly to a news website or app. Younger audiences were also the least trusting age group, with only a third ‘trusting most news most of the time’, and showed substantially greater rises in news avoidance than older age groups. Whether Gen Z, with its different internet habits, will be more or less prone to disinformation remains the subject of considerable debate. 


Muhsin Yesilada is a Doctor of Philosophy in the School of Psychological Science at the University of Bristol. Paul Grasby is a Research to Practice Fellow at CREST. The authors wish to thank Professor Tom Buchanan and Professor Steven Lewandowsky for their initial advice on the terms to include in this piece. 

Read more

Basol, M., Roozenbeek, J., & Van der Linden, S. (2020) Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3(1). https://doi.org/10.5334/joc.91  

Innes, M., Dobreva, D. & Innes, H. (2021) Disinformation and digital influencing after terrorism: spoofing, truthing and social proofing. Contemporary Social Science, 16(2), 241-255. https://doi.org/10.1080/21582041.2019.1569714 

Lewandowsky, S., & Yesilada, M. (2021) Inoculating against the spread of Islamophobic and radical-Islamist disinformation. Cognitive Research: Principles and Implications, 6, 1-15.  https://doi.org/10.1186/s41235-021-00323-z  

Petersen, M., Osmundsen, M., & Arceneaux, K. (2023) The “Need for Chaos” and Motivations to Share Hostile Political Rumors. American Political Science Review, 1-20.  https://doi.org/10.1017/S0003055422001447  

Reuters Institute (2022) The changing news habits and attitudes of younger audiences. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2022/young-audiences-news-media 

Vosoughi, S., Roy, D. & Aral, S. (2018) The spread of true and false news online. Science, 359(6380), 1146-1151. https://www.science.org/doi/10.1126/science.aap9559 

Weikmann, T. & Lecheler, S. (2022) Visual disinformation in a digital age: A literature synthesis and research agenda. New Media & Society. https://doi.org/10.1177/14614448221141648  

Wu, V., Carey, J., Nyhan, B., & Reifler, J. (2022) Legislator criticism of a candidate’s conspiracy beliefs reduces support for the conspiracy but not the candidate: Evidence from Marjorie Taylor Greene and QAnon. Harvard Kennedy School (HKS) Misinformation Review, 3(5).  https://doi.org/10.37016/mr-2020-103   

Yesilada, M. & Lewandowsky, S. (2022) Systematic review: YouTube recommendations and problematic content. Internet Policy Review.  https://doi.org/10.14763/2022.1.1652