Host Labs and Projects

Below is a selection of potential host institutions and project ideas to guide prospective applicants. Applicants should review these options and identify those that align with their interests and experience. It is the applicant's responsibility to contact prospective hosts directly to discuss suitability and confirm mutual interest.

Once a lab/project has agreed in principle to host the applicant, the applicant must complete the NABS+ Qualtrics form. We will not approach a lab/project on an applicant's behalf.

Please note that this list is not exhaustive. Applicants are welcome to identify and approach alternative UK-based labs or research groups independently, provided the proposed Fellowship meets the eligibility and remit requirements outlined on the main Visiting Fellowships page.

Call for Host Labs and Projects

We are currently seeking expressions of interest from labs willing to host a visiting fellow, as well as proposals for specific projects or studies that would benefit from additional research support.

To ensure prospective Visiting Fellows have sufficient time to explore opportunities, expressions of interest for inclusion must be received by the beginning of April.

If your lab or project would like to be included on the hosts page, please read the details of the Call Summary.


University of Bath

Bath Institute for Digital Security and Behaviour

Applicants should read about our remit and the expertise of our members at https://www.idsb.ac.uk/, identify potential members who might act as hosts, and get in touch to discuss and refine their application.

Duration: Up to nine months.

Emails with an expression of interest and a draft project idea (max 300 words) should be sent to Sarah Parry: scp40@bath.ac.uk


University College London

The Counter-Terrorism Research Group (CTRG)

CTRG is an internationally recognised research group specialising in the empirical study of terrorism, targeted violence, online harms, and emerging threats. Our work combines behavioural and social science with advanced analytical approaches to produce evidence that informs P/CVE (preventing and countering violent extremism) policy and practice.

What CTRG offers Visiting Fellows

Visiting Fellows hosted by CTRG will have the opportunity to:

  • Collaborate on cutting-edge research
  • Work with closed-source data to address real-world challenges
  • Engage directly with practitioner and policy teams with whom we have established relationships
  • Apply behavioural analytics to operationally relevant problems, including contributing to one or more of the following projects:
    • Understanding how mental health and neurodevelopmental conditions intersect with susceptibility to violent extremism
    • Examining how behaviours, communications, and language may (or may not) predict risk among fixated individuals
    • Exploring intersections between violence against women and girls (VAWG) and violent extremism

Please note that most projects require SC (Security Check) clearance, so a candidate who already holds SC clearance would be at an advantage.

Duration: Up to nine months.

Emails with an expression of interest and a draft project idea (max 300 words) should be sent to Caitlin Clemmow: caitlin.clemmow@ucl.ac.uk


Lancaster University: Forensic Linguistics (FACTOR Lab)

Speech LLM Voice Descriptions 

Brief: Previously, speech LLMs (SLLMs) have been used to "silently" label accent/dialect, transcribe speech, identify speakers, etc., without producing any overt explanation of how those conclusions were reached. However, SLLMs are now capable of outputting unstructured descriptions, which may provide an avenue of explainability. Do these descriptions match the labels the systems have traditionally produced? Do they also match the voice sample under analysis? If there are differences, what are they, and why?

Applications/deliverables: Security/intelligence sectors are required to make broad assessments of voices at scale and at speed. Whilst these "traditional" labels are useful for quick analyses, their lack of explainability means that they do not support evidence-based decision-making pipelines or risk ownership. This fellowship would investigate whether the same systems (SLLMs) are capable of explaining their own reasoning.

Candidate: Someone within a relevant organisation and/or an academic in another lab interested/involved in the explainability of AI systems, particularly in high-stakes intelligence/security contexts. 

Duration: Six to nine months.

Calibrating Trust in AI: measuring human predictions of speaker recognition system performance 

Brief: Speaker recognition systems – artificial intelligence for identifying a given speaker – are widely deployed across intelligence, security, and adjacent contexts. However, the performance of these systems on different voices and data types is highly variable. When asked to predict what answer a speaker recognition system will give, how accurate are humans? Do they under- or overestimate its capabilities? Are they too trusting of the technology, or not trusting enough?

Applications/deliverables: Gauging and calibrating the degree of trust that humans (and particularly intelligence and security practitioners) have in speaker recognition systems, and whether this is too much or too little. This could be extended to also incorporate speech recognition systems (transcription systems), AI detection technologies, etc.

Candidate: An individual within a relevant organisation and/or an academic in another lab interested/involved in how much people/practitioners (mis)trust AI, especially in high-stakes contexts, and how much of that (mis)trust is misplaced. 

Duration: Three to nine months. 

Bot or Not: From public resilience to professional upskilling 

Brief: The Bot or Not suite of perception tests has an extraordinarily successful engagement record, including most of the civil service, the NCSC, the UKHSA, the Met Office, and far beyond. These deliberately anonymous, public-oriented tests balance the fun of a "low-friction", scored online quiz with implicit resilience-building by raising awareness of advances in generative AI. In so doing, they have provided an exceptional baseline dataset of the public's general capacity for detecting AI-generated texts, speech, and even music, along with their reasoning for their decisions. However, these tests also present a unique opportunity for those working in specialised security/intelligence and adjacent areas.

Applications/deliverables: Through the original tests, subsequent tailored versions of the tests, and comparisons with the existing baseline, this placement would look to establish (a) how well practitioners in current, relevant contexts perform versus the established baselines, and (b) the optimal training to maximally upskill those same practitioners, with a particular eye on time/cost investments and diminishing returns. From this, we envision developing flexible CPD materials to extend to other relevant practitioner groups.

Candidate: Someone within a relevant organisation interested/involved in the potential threats posed by generative AI.

Duration: Six to nine months. 

Emails with an expression of interest for any of the above fellowships should be sent to Prof Claire Hardaker (c.hardaker@lancaster.ac.uk) and Dr Georgina Brown (g.brown5@lancaster.ac.uk).


Lancaster University: The Nightingale Lab

Project focus: The Nightingale Lab conducts socio-technical research to explore and mitigate harms caused by emerging technology, e.g., the use of Artificial Intelligence (AI) to create sexual digital forgeries (SDF), a form of gender-based abuse. We combine social science, data science, and computational approaches to examine how new technologies are being used to commit gender-based harms, and to develop innovative approaches to prevent such behaviour. Much of our work explores the harms and challenges faced by those who have been targeted in SDF and seeks ways to better support these individuals. We’d welcome visiting fellows interested in proposing a project linked to this topic. In addition to running your own project, you will also have the opportunity to collaborate with lab members on existing work.

We are particularly interested in conducting a perpetration mapping exercise to better understand the motivations and reasoning for creating and sharing SDF. This piece of work might also include a risk mapping exercise in which we explore how SDF intersects with other types of criminality.

In addition to exploring tech-facilitated gender-based abuse, Dr Sophie Nightingale (Nightingale Lab leader and Early Career Researcher (ECR) lead for NABS+), along with the current members of the ECR Steering Group, is looking for a visiting fellow to join the lab to work on projects relating to improving ECR experiences and opportunities. One project the fellow will be asked to contribute to is the development of an online survey capturing the perceptions, concerns, and experiences of UK and international ECRs in the field of analytical behavioural science in security and defence, regarding the networks, support, and resources available for their professional development and career trajectory. Alongside this, you are encouraged to bring your own project and activity ideas for improving ECR experiences in this field. Our hope is that this work to understand and improve ECR experiences in security and defence could be carried out in conjunction with any work relating to SDF.

Host lab: The successful candidate will be hosted within the Nightingale Lab which currently has eight team members working in a collaborative and supportive environment. Through a placement within the lab, you will gain first-hand experience of our research on mitigating AI harms as well as opportunities for networking with others in the lab and wider university. You will have opportunities to join weekly meetings to benefit from the wide range of expertise of the lab members. The successful candidate will be assigned a mentor who will help to maximise the learning opportunities within this placement.

We also have strong links with partner organisations such as Lancashire Police and the Office of the Police and Crime Commissioner for Lancashire. Where possible, we will encourage interaction with these partners to demonstrate the real-world impact of the work.

Candidate: You will be highly motivated with a keen interest in mitigating gender-based harms caused by emerging technologies as well as, ideally, an interest in improving the experience of ECRs working in security and defence related areas.

In our lab, we adopt a wide range of socio-technical approaches and therefore welcome candidates with quantitative, qualitative, mixed-methods, computational, or data science expertise.

You should have a clear idea of how you would like to make the most of the placement to enhance your own development.

Experience of experimental design and online survey development (e.g., via Qualtrics or PsychoPy), as well as quantitative research skills such as data analysis, would be beneficial.

Duration: Up to nine months.

Emails with an expression of interest, an up-to-date CV, and a draft project idea (max 300 words) should be sent to Dr Sophie Nightingale (s.nightingale1@lancaster.ac.uk).
