Human Engagement Through Artificial / Augmented Intelligence

This project reviews how augmented intelligence aids human decision-makers, with the aim of producing specific use-cases that examine potential breakdown, and opportunities for repair, in human-AI engagement.

‘Augmented intelligence’ uses Artificial Intelligence / Machine Learning (AI / ML) to extend human cognitive ability. An experienced analyst working with augmented intelligence ought to perform better than either would alone. The capabilities of AI / ML for exploring vast data resources and discovering patterns exceed those of the human, whereas the human’s expertise allows insight into unusual or unfamiliar patterns. There is therefore a need to ensure collaboration in pursuit of sense-making.

This requires that the AI can explain itself to the human, that the human can provide explanation to the AI, and that human-AI engagement progresses through the establishment and maintenance of common ground. This engagement occurs within a ‘system’; that is, the cooperation between human and AI / ML is one interaction among many: humans cooperate with other humans, humans programme the AI / ML, humans may select and prepare the data that the algorithms use, and the AI may interact with other algorithms.

Not only is it important that humans and automation establish and use common ground, but also that humans who communicate through automation maintain common ground with one another. We ask how common ground might break down, in order to explore the consequences and possible mitigations.

Artificial Intelligence (AI) and Machine Learning (ML) approaches currently work most effectively when the domain in which they operate is well-bounded: when the available information is consistent, when the ‘rules’ for acting on the domain are well-specified, and when the reward function can be formulated clearly.

Consequently, the experience of using AI / ML to augment human intelligence depends on the nature of the domain in which it is applied. For example, if AI / ML is used to support decision-making, one might ask what counts as ‘evidence’ and how the ‘authority’ of a decision-maker is agreed between human and AI / ML.

The objectives for this 12-month project are:

  1. Review how augmented intelligence aids human decision-makers.
  2. Categorise ‘Degree of Augmented Intelligence’ in terms of impact on human performance.
  3. Produce a theoretical model of common ground, how this impacts on explainability, and the ways in which intention is attributed or presented by actors in human-AI engagement.
  4. Produce specific use-cases examining potential breakdown, and opportunities for repair, in human-AI engagement.
  5. Produce short, industry-specific guides.

