Disinformation: What did we learn from the year of elections?

Assessing exposure to and the effects of disinformation and foreign state information operations.

Date: 16 October 2025
Time: 14:00–15:30
Speaker: Professor Jacob N. Shapiro

Professor Jacob N. Shapiro explored how online political influence efforts, coordinated campaigns designed to appear domestic to their targets, have changed since their 2018 peak. Today only China and Russia continue to launch new foreign-facing operations, while most other states focus on influence at home. He noted that the impact of these campaigns is often overstated, with little evidence of major effects or of large language models (LLMs) amplifying propaganda. Shapiro called for more rigorous interdisciplinary research and for multilingual public-interest audits to understand global patterns of disinformation.


Key Definitions

Online Political Influence Efforts are coordinated campaigns by a state, or by the ruling party in an autocracy, to affect one or more specific aspects of politics at home or overseas through media channels, including social media, by producing content designed to appear indigenous to the target state. They are a type of Influence Operation (IO).

Online Political Influence Efforts occur when one country reaches into another country's media in ways that conceal the operatives' identity or involvement. This is distinct from propaganda, which does not hide its source. 2018 was the peak year for IOs; by 2023 the only countries starting new campaigns were China and Russia.

Coordinated Inauthentic Behaviour (CIB): coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation. Facebook disables networks engaged in CIB.


Key points and takeaways

  • At least to the extent that systematic assessment is possible, LLMs do not consistently recapitulate Russian propaganda, even in response to questions derived from that propaganda. This suggests that the threats portrayed in headline news are not materialising as feared.
  • Where the underlying evidence base is thin, GPT is likely to give an equivocal response, and LLMs that equivocate are harder to manipulate (a minimal sketch of such an audit follows this list).
  • The key performance indicators (KPIs) used by Russian operatives are ineffective and inflate the apparent impact of Russian IOs. Operator competency is also low.
  • Most states have given up on online IOs as a tool for overseas action, though many use IOs at home (including democracies such as Mexico).
  • Disinformation is endemic, not pandemic. Europe was the only region in which interference was identified throughout the election year (during the EU elections).
  • Direct exposure to IOs is tiny in the US; if these operations are having an effect, it must be through other transmission mechanisms.
  • The study of political persuasion suggests that small, short-term impacts are possible on less prominent issues.
  • There is little direct evidence of effects, but operations in conflict settings remain almost completely unstudied.
  • Just because an action or effect is possible (e.g. LLMs being used in IOs) does not mean it will be impactful. Don't 'buy the hype'.
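
To make the recapitulation and equivocation points above concrete, here is a minimal sketch of how such an audit might be run. It is illustrative only: query_model is a hypothetical stand-in for whatever LLM API an auditor uses, the questions are invented placeholders rather than the study's actual prompts, and the keyword-based coding is a toy substitute for human coders or a validated classifier.

```python
from collections import Counter
from typing import Callable

# Hypothetical questions derived from propaganda narratives; a real
# audit would draw these from a documented corpus of known claims.
QUESTIONS = [
    "Is the claim that country A staged event B supported by independent evidence?",
    "Did organisation C fabricate the reports about incident D?",
]

def classify_response(text: str) -> str:
    """Toy three-way coding of a model's answer: endorse / refute / equivocal.

    A real audit would use trained human coders or a validated
    classifier instead of keyword matching.
    """
    t = text.lower()
    if any(k in t for k in ("yes, this is true", "it is confirmed that")):
        return "endorse"
    if any(k in t for k in ("this claim is false", "there is no credible evidence")):
        return "refute"
    return "equivocal"  # hedged answers are harder to weaponise

def audit(query_model: Callable[[str], str]) -> Counter:
    """Pose every propaganda-derived question and tally the codings."""
    return Counter(classify_response(query_model(q)) for q in QUESTIONS)

if __name__ == "__main__":
    # Stub standing in for a real model API call; it always hedges.
    hedging_stub = lambda q: "Reporting on this is conflicting and no firm conclusion can be drawn."
    print(audit(hedging_stub))  # Counter({'equivocal': 2})
```

The audit reports aggregate rates rather than any single answer: a high equivocation rate on thin-evidence questions is consistent with the observation above that equivocating models are harder to manipulate.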


Priority themes for future work

  • Interdisciplinary teams: to bring together subject-specific knowledge and generic influence-evaluation skills.
  • Non-broadcast elements of information diets: information is being mediated by agents and becoming more bespoke. Research protocols should facilitate data donation while preserving participant privacy and security. This could include using data portability rights under GDPR or services such as Comscore.
  • Scaled multilingual public-interest audits: needed to analyse different kinds of LLMs. Existing work is mainly proprietary research for product development, or one-off studies by computer scientists that are highly specialised and aimed at an academic audience (a sketch of how such an audit might scale follows this list).
  • Potential impacts outside the US and EU: more research is needed on global impacts.
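
The scaled multilingual audits called for above could extend the earlier sketch across languages and models. The code below is an assumption-laden illustration, not an existing pipeline: the models argument would map labels to query functions for whichever APIs an auditing group can access, translate stands in for a machine-translation step (with human review in practice), and classify would be something like the toy coder from the earlier sketch.

```python
from collections import Counter
from typing import Callable, Dict, List, Tuple

# Hypothetical audit grid: the language list is a placeholder.
LANGUAGES = ["en", "ru", "zh", "es", "ar"]

def translate(question: str, lang: str) -> str:
    """Placeholder: a real pipeline would call an MT service here,
    with human review of the translated prompts."""
    return f"[{lang}] {question}"

def scaled_audit(
    models: Dict[str, Callable[[str], str]],
    questions: List[str],
    classify: Callable[[str], str],
) -> Dict[Tuple[str, str], Counter]:
    """Tally response codings for every (model, language) cell so that
    endorsement and equivocation rates can be compared across the grid."""
    results: Dict[Tuple[str, str], Counter] = {}
    for model_name, query in models.items():
        for lang in LANGUAGES:
            results[(model_name, lang)] = Counter(
                classify(query(translate(q, lang))) for q in questions
            )
    return results
```

Publishing the full (model, language) grid, rather than a single headline number, is what would distinguish a public-interest audit from the proprietary, product-focused evaluations described above.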