XEQ Scale evaluation tool

Author:

Robert Gordon University

Type of Resource:

Toolkit

Area:

Governance and Regulation, Users

Description:

Organisations are leveraging AI for increasingly complex tasks. It is important that the outcomes of AI-based systems can be explained, to meet UK and European legislation on an individual’s right to understand how their data is used. However, evaluating the quality of machine-generated explanations is difficult, particularly when (a) the system has more than one explainer providing explanations, or (b) multiple stakeholders with different expertise use the system.

The XEQ Scale is a user-centred, psychometrically validated survey tool for evaluating explanation experiences. It comprises 18 statements, each answered on a 5-point Likert scale ranging from strongly disagree to strongly agree. The statements are linked to four high-level ‘evaluation factors’ that the XEQ Scale investigates: learning, fulfilment, utility and engagement. The scale supports detailed analysis by breaking results down by stakeholder group and by evaluation factor.
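To illustrate how responses on such a scale might be aggregated, the sketch below computes per-factor mean scores from one participant's 18 Likert responses. This is not the official XEQ analysis tool, and the statement-to-factor mapping shown is invented for illustration; the actual assignment of statements to factors is given in the XEQ Scale guide linked below.

```python
# Illustrative sketch only: aggregate Likert responses into per-factor means.
# The statement-to-factor mapping below is hypothetical, not the real XEQ mapping.
from statistics import mean

# Hypothetical mapping: evaluation factor -> statement indices (0-based)
FACTORS = {
    "learning":   [0, 1, 2, 3, 4],
    "utility":    [5, 6, 7, 8],
    "fulfilment": [9, 10, 11, 12],
    "engagement": [13, 14, 15, 16, 17],
}

def factor_scores(responses):
    """responses: 18 Likert values (1 = strongly disagree ... 5 = strongly agree)."""
    if len(responses) != 18:
        raise ValueError("Expected 18 Likert responses")
    return {factor: mean(responses[i] for i in idxs)
            for factor, idxs in FACTORS.items()}

# One participant's responses, e.g. from a single stakeholder group
scores = factor_scores([4, 5, 4, 3, 4, 5, 4, 4, 5, 3, 4, 4, 3, 5, 4, 4, 3, 4])
print(scores)
```

Repeating this per stakeholder group would give the factor-by-group breakdown the scale is designed to support.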

The XEQ Scale helps organisations check that their explanations are meeting users’ needs. The results can provide useful insights for improving the system, benchmarking against competitors, or evidencing compliance with relevant legislation.

Details on the XEQ Scale, a step-by-step guide, downloadable and online versions, and data analysis tools are available at the link below.
