Shortly after the Westminster Bridge terror attack in 2017, an image of a woman in Islamic dress, apparently walking past victims, was posted on social media with a caption designed to stoke Islamophobic sentiment. It was widely shared. Following the Salisbury poisonings of 2018, numerous claims were circulated online attributing blame to the UK security services. In the run-up to the 2018 US mid-term elections, inflammatory social media posts played on concern about an approaching ‘migrant caravan’, making numerous false claims designed to influence voter behaviour. All of these were examples of disinformation: ‘Fake News’ designed to provoke extremist sentiment, influence political processes, or seed distrust and confusion in society.

Politically motivated disinformation on social media is seen as a significant problem. Its effects are pernicious and wide-ranging, to the extent that the House of Commons Digital, Culture, Media and Sport Committee’s inquiry into ‘Fake News’ concluded in 2018 that ‘our democracy is at risk’. To increase the effectiveness of anti-disinformation campaigns, we need to know more about how disinformation is spread online.

Individual social media users are key to the spread of disinformation online. By interacting with disinformation, they share it with their own social networks. This can greatly increase its reach and potential impact on society. Why do people do this? Are they fooled by the disinformation, and spread it because they believe it is true? Do they know the information is fake but spread it anyway? How does the way disinformation is presented influence our likelihood of sharing it? Are some people more likely to share disinformation than others?

This project will address these questions through a series of experiments testing whether characteristics of a disinformation message (such as whether it appears to come from an authoritative source) and of the individual (such as their level of digital media literacy) influence the likelihood of that person sharing the message online. It will also test whether the same mechanisms operate across different social media platforms.

Understanding the human factors that influence the spread of disinformation will be valuable in trying to counteract it. For example, if digital media literacy is found to be important, that would support the case for public education campaigns about online disinformation. If not, other approaches may be more effective, and the findings of this project will inform what those approaches might be.
