Date: 27 February 2026
Speaker: Stephen H. Campbell
Chief Technology Officer for the DISARM Foundation, founder of Non-State Threat Intelligence, and advisor to eosedge Legal.

Stephen H. Campbell used the public disorder in Southport in 2024 as a case study to argue that disinformation, amplified by algorithmic incentives and artificial intelligence, has become a primary accelerant and catalytic driver of civil unrest. He contended that this development necessitates the establishment of national early-warning systems, behavioural intelligence frameworks, and stronger regulation of digital platforms. 

You can read a summary of the key points below or download our readout for a full list of definitions. Stephen has also kindly provided a video recording of his session for those who couldn't make it.

Watch on Vimeo

Key Points & Takeaways 

The Information Environment Has Fundamentally Changed 

  • Traditional media gatekeeping (editors, anchors, institutions) has, to a large extent, been replaced by algorithms, influencers, and Big Tech platform owners.
  • Engagement-driven algorithms play to confirmation biases, prioritising emotion, outrage, and polarisation over truth.
  • We now operate in an era of narrative dominance: facts matter only insofar as they support narratives, and disinformation spreads faster and more effectively than any correction.
  • AI is dramatically lowering the cost, speed, and scale of disinformation, worsening these dynamics. 

#1: The current information ecosystem structurally amplifies emotionally engaging and polarising narratives that provoke outrage.

Disinformation Acts as an Accelerant for Real-World Violence 

  • The Southport incident demonstrates how a real-world tragedy combined with false attribution rapidly escalated into mass mobilisation and violence.
  • Disinformation transforms local incidents into national unrest.
  • This follows a consistent pattern: Grievance → Hate → Disinformation → Trigger Event → Mobilisation → Violence

#2: Disinformation is not merely misleading; it can act as a catalytic accelerant, contributing to civil unrest and public disorder.

Economic Incentives Drive Disinformation Spread

  • Much of the amplification came from monetised clickbait websites, not state actors.
  • Outrage, fear, and polarisation generate traffic and advertising revenue.
  • Platform algorithms structurally reward this behaviour.

#3: Disinformation is financially incentivised, meaning systemic harm is embedded within prevailing digital platform business models.

Intelligence and Early Warning Systems Failed 

  • Strategic indicators of rising hate and grievance were visible prior to Southport.
  • Data from civil society groups showed rising hate incidents and increasing polarisation.
  • Yet public disorder risk was assessed as low. 

#4: The intelligence failure was systemic, not operational.

A Fusion Centre (Central Monitoring System) Is Needed

  • There is no central body in the UK responsible for continuously monitoring disinformation threats, integrating data streams, and producing national early warnings.

#5: The UK requires a permanent national-level monitoring and fusion centre for disinformation, hate escalation, mobilisation risk, and public disorder forecasting. 

Behavioural Analysis Is More Effective Than Content Moderation 

  • Focusing on behaviours avoids free speech debates and regulatory paralysis.
  • DISARM provides a behaviour-based taxonomy and a standardised way to describe and track manipulative information operations.
  • This enables shared situational awareness, cross-agency intelligence sharing, and predictive modelling based on behavioural fingerprints. 
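To make the idea of behavioural fingerprints concrete, the sketch below shows how incidents described with a shared behaviour taxonomy could be compared regardless of what the content itself says. The behaviour labels and the similarity measure are illustrative assumptions for this note only, not DISARM's actual taxonomy or tooling.

```python
from collections import Counter

# Hypothetical sketch: the behaviour labels below are illustrative
# stand-ins, not real DISARM technique identifiers. The point is that
# incidents tagged with a common behaviour taxonomy can be compared as
# "fingerprints" without ever adjudicating the content itself.

def behaviour_fingerprint(observed_behaviours: list[str]) -> Counter:
    """Count how often each taxonomy behaviour appears in an incident."""
    return Counter(observed_behaviours)

def fingerprint_similarity(a: Counter, b: Counter) -> float:
    """Jaccard-style overlap between two incidents' behaviour sets (0-1)."""
    shared = set(a) & set(b)
    combined = set(a) | set(b)
    return len(shared) / len(combined) if combined else 0.0

# Two incidents sharing two of four distinct behaviours overlap at 0.5.
incident_a = behaviour_fingerprint(
    ["coordinated-posting", "false-attribution", "hashtag-flooding"]
)
incident_b = behaviour_fingerprint(
    ["coordinated-posting", "false-attribution", "paid-amplification"]
)
print(fingerprint_similarity(incident_a, incident_b))
```

Because the comparison operates on behaviour labels rather than message text, the same fingerprints could feed cross-agency sharing or predictive models without raising content-moderation questions.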

#6: Tracking behaviours, not content, is the most scalable, legally robust, and operationally viable intervention strategy. 

Disinformation Risk Can Be Quantified 

  • A disinformation impact scale can measure engagement, virality, cross-platform spread, influencer amplification, and calls to action.
  • Escalation thresholds could trigger police response, platform takedowns, and emergency coordination. 
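The two bullets above can be sketched as a single score with escalation bands. Every weight, metric name, and threshold below is an assumption made for illustration; the talk proposed the concept of measurable indicators and thresholds, not these particular numbers.

```python
from dataclasses import dataclass

# Hypothetical illustration of a disinformation impact scale: the five
# indicators from the talk are combined into a weighted 0-100 score, and
# score bands map onto escalation responses. Weights and thresholds are
# invented for this sketch.

@dataclass
class NarrativeMetrics:
    engagement: float                # normalised 0-1 (likes, shares, comments)
    virality: float                  # normalised 0-1 (growth rate of spread)
    cross_platform: float            # normalised 0-1 (share of platforms reached)
    influencer_amplification: float  # normalised 0-1 (reach of amplifying accounts)
    calls_to_action: float           # normalised 0-1 (density of mobilising language)

WEIGHTS = {
    "engagement": 0.15,
    "virality": 0.25,
    "cross_platform": 0.20,
    "influencer_amplification": 0.20,
    "calls_to_action": 0.20,
}

THRESHOLDS = [  # (minimum score, suggested response tier), highest first
    (80, "emergency coordination"),
    (60, "platform takedown referral"),
    (40, "police notification"),
    (0,  "routine monitoring"),
]

def impact_score(m: NarrativeMetrics) -> float:
    """Weighted sum of the five indicators, scaled to 0-100."""
    total = sum(weight * getattr(m, name) for name, weight in WEIGHTS.items())
    return round(100 * total, 1)

def response_tier(score: float) -> str:
    """Map a score onto the first escalation band it clears."""
    for floor, tier in THRESHOLDS:
        if score >= floor:
            return tier
    return "routine monitoring"
```

For example, a narrative with high engagement and strong calls to action but limited influencer amplification would land in a mid band, triggering a takedown referral rather than emergency coordination.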

#7: Some dimensions of disinformation and mobilisation can be operationalised into measurable indicators and thresholds for early warning purposes. 

Platforms Are Risk Multipliers 

  • Platforms such as X (formerly Twitter) act as algorithmic hate amplifiers and mobilisation accelerators.
  • Current regulation remains insufficient. 

#8: Without strong platform regulation and algorithmic accountability, civil unrest driven by disinformation could continue to scale. 

Lawful-but-Awful Content Remains a Major Gap 

  • Much harmful content does not meet illegality thresholds yet contributes significantly to radicalisation and mobilisation. 

#9: Regulatory frameworks should address lawful but harmful content, not only illegal speech.
