Disinformation does not spread on its own; it is spread by people who decide to spread it. Likewise, extremist Islamophobia and Islamist extremism do not arise on their own. They arise when vulnerable people are exposed to increasingly extreme disinformation, either about the expansionist tendencies of Islam or about Western hostility towards Muslims. Containing extremism therefore demands tools to control the impact and spread of disinformation.
This is particularly challenging because existing counter-extremism narratives have often turned out to be inadvertently counter-productive, and because it is notoriously difficult to correct misinformation once recipients have accepted it as true.
This project therefore takes a different approach by examining the efficacy of inoculation against misinformation. Benjamin Franklin is credited with the adage that an ounce of prevention is worth a pound of cure. The same principle applies to combating ‘fake news’, propaganda, and misinformation: if people are warned that they might be misled before the misinformation is presented, they become more resilient to it. Inoculation comes in a number of different forms, and it is most successful when it refutes an anticipated false argument by exposing its underlying fallacy.
In the same way that a vaccination imitates an infection to stimulate the body into generating antibodies, which can then fight the disease when a real infection occurs, psychological inoculation stimulates the generation of counter-arguments that prevent subsequent misinformation from sticking. This project extends the approach to countering extremist messages surrounding Islam. It examines two related research questions:
RQ 1) What is the nature of the misinformation currently available in the UK cultural context that supports Islamophobia on the one hand, and Islamist extremism on the other? What is a likely path through the misinformation landscape that a vulnerable person might follow during self-radicalisation? Can reliable rhetorical markers be identified that reveal information to be false and extremist?
RQ 2) Can these markers of misinformation be “reverse engineered” to create inoculating tools that can protect vulnerable people against misinformation and potential radicalisation?
The first research question will be answered by a content analysis of the online misinformation landscape to identify representative pathways that a consumer might follow during self-radicalisation. The second research question will be answered by an experimental study that seeks to inoculate participants (young Muslim and non-Muslim UK residents) against misinformation by training them to recognise the misleading rhetorical tools.
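To make the idea of rhetorical markers concrete, the sketch below shows, in the simplest possible terms, how candidate markers could be flagged in text once they have been identified. The marker list and patterns are purely hypothetical illustrations, not the project's actual findings; the real markers would emerge from the content analysis described above.

```python
import re

# Hypothetical markers: rhetorical devices often associated with misleading
# content. These patterns are illustrative placeholders only.
MARKERS = {
    "false_dichotomy": re.compile(r"\b(either .* or|the only choice)\b", re.IGNORECASE),
    "appeal_to_fear": re.compile(r"\b(they will destroy|before it is too late)\b", re.IGNORECASE),
    "overgeneralisation": re.compile(r"\b(all|every|no) (muslims?|westerners?)\b", re.IGNORECASE),
}

def flag_markers(text: str) -> list[str]:
    """Return the names of all markers whose patterns match the text."""
    return [name for name, pattern in MARKERS.items() if pattern.search(text)]

example = "Every Westerner is hostile; act before it is too late."
print(flag_markers(example))  # → ['appeal_to_fear', 'overgeneralisation']
```

In practice, marker detection of this kind could feed the inoculation training: showing participants which rhetorical devices triggered a flag is one way to teach them to recognise the same devices unaided.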
More information can be found here: www.cogsciwa.com
University of Bristol