Panel 8: Assessing information in the digital world

Chair: Christina Winters

LT 5&6, Wednesday 20th, 1430-1630

Mark Rouncefield

Reader in Social Informatics | Lancaster University

Policing and Social Media Screening

Session: Panel 8: Assessing information in the digital world (LT5)

The Internet brings many people together, including those whose agendas include forms of deviance and criminality. Social media platforms are instrumental in connecting people who intend to harm others in various ways. This paper reports initial findings of a small empirical project concerned with police approaches to eliciting and assessing information from social media for information-gathering, the detection of crime and the development of criminal intelligence. The paper will scope the relationship between criminality and social media use within one police force, report on current practices in dealing with social media and criminality, and assess whether there is potential for improvement – both technological and through training and education. The focus is on the social media use of offenders, in order to identify whether there are early indications of offending. Using qualitative ethnographic methods of observation and interview, the paper assesses the effectiveness of current police social media screening practices: identifying popular online platforms for criminal activity; evaluating the current tools used to screen social media; investigating how social media data can be and is collected, scraped and searched; and identifying opportunities to improve upon the social media screening activities conducted by the police.

Co-authors: Rob Ewin, Sam Maesschalck and Corinne May-Chahal

Keenan Jones

PhD | University of Kent

Who Are You? Exploring the Threat of Automated User Mimicry in Deceiving Online Social-Media Analysis

Recently, there has been rapid growth in the development of powerful computational systems capable of fluent text generation. These text generators have shown capabilities across a variety of applications, including language translation, question answering, and story writing. However, fears have arisen concerning these systems and their potential for abuse – with convincing text generation by automated systems leading to enhanced capabilities for generating propaganda, fake news, and other forms of misinformation. Our research examines a previously unexplored threat that these text generators may pose: the ability to mimic the writing style of a given social-media user. This capability could allow malicious agents to generate new texts that appear to be from a targeted user (e.g., a politician or activist), an act which could harm the user’s reputation and deceive others. Through a series of data-driven experiments, we find that automated text generators can mimic a user’s writing style on both Twitter and blogs well enough to deceive machine learning-based social-media analysis systems. Our talk will outline this research and highlight a newfound threat to online investigations, the analysis of social-media data, and the establishment of trust online – emphasising the need for approaches capable of combating these new forms of cyber-deception.

Co-authors: Jason R. C. Nurse and Shujun Li, University of Kent

Oli Buckley

Associate Professor | University of East Anglia

Talk to the Machines: Using Chatbots to Enhance Sensitive Disclosures

The way in which individuals interact with technology is rapidly evolving, as users increasingly expect fast, reliable and accurate information. In order to deliver systems capable of meeting these expectations, businesses and government departments alike are turning to conversational agents (or chatbots). These conversational agents are capable of interacting and engaging with users, answering user queries and even providing advice and guidance as required. This research considered how this technology can be optimised to provide a more effective method of communication, while also focusing on the implicit trust that a user places in a conversational agent. It investigated the nature of sensitive information and how its context can play a role in its perceived sensitivity, using a range of experiments to better understand the public’s perceptions of personal information and how those perceptions relate to the classification of the information. In order to fully understand the use of conversational agents, it is essential to properly understand both the nature of personal, sensitive information and the agents’ perceived trustworthiness. We examined the different facets of a conversational agent’s humanness, personality and appearance, and their effect on an individual’s perceptions and trust.

Co-authors: Duncan Hodges, Cranfield University; Jason Nurse, University of Kent; Helen Dawes, University of Exeter; Natalie Wyer, UEA

Rob Huw Peace

PhD | University of Bath

Does expertise moderate the use of digital trust signals and symbols when assessing online information?

Incorrectly assessing digital information has many repercussions for users: from downloading malicious code in open-source software repositories, to becoming a victim of misinformation. Study 1 was a systematic review (N = 63 studies) of the digital symbols and signals that communicate trust when assessing digital information. The results suggested that trust signals and symbols fall into three themes: social proof, verification to reduce the variance of risk, and expectancy violation theory. Study 2 (N = 20 participants) was a thematic analysis exploring whether expertise moderates the use of trust signals and symbols in open-source software libraries. Results indicated that expert and lay users differ in how they utilise trust cues to assess digital information. The implication of these studies is that the ways in which people use trust cues create a range of vulnerabilities that malicious actors can exploit. Researching which digital trust signals and symbols users rely on when assessing the trustworthiness of digital information may help to inform how such vulnerabilities can be mitigated.

Co-authors: Dr. Laura G.E. Smith and Professor Adam Joinson, University of Bath
