Foundations of a Trustworthiness Risk Assessment Framework for AI Systems


Project Vision

The project aims to lay the foundations for a trustworthiness risk assessment framework that can describe and automatically assess trustworthiness risks associated with AI systems. Trustworthiness risk assessment is expected to form an integral element of the future governance and management of AI systems, and as such will be a key component in the blueprint for implementing trustworthy AI.

Project Objectives

We will investigate how to extend ISO 27005 information security risk management concepts and processes to trustworthiness risk management: identifying key assets; determining how threats can compromise those assets, giving rise to risk; and selecting controls that reduce the likelihood of a threat or mitigate the impact of a risk. To address the complexity of risk modelling and assessment, we will use a system security modelling (SSM) platform to capture trustworthiness risk knowledge and automate risk assessment. The SSM implements ISO 27005, is designed to model risks within socio-technical systems, and has been applied to a wide range of use cases, including data protection compliance assessment and privacy risk assessment.
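The ISO 27005-style chain described above (assets, threats, risk, controls) can be sketched in code. This is a minimal illustrative sketch, not the SSM platform's actual model: all class names, the 1–5 likelihood/impact scales, and the additive control reductions are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    likelihood: int  # assumed scale: 1 (rare) .. 5 (almost certain)
    impact: int      # assumed scale: 1 (negligible) .. 5 (severe)

@dataclass
class Control:
    name: str
    likelihood_reduction: int = 0
    impact_reduction: int = 0

@dataclass
class Asset:
    name: str
    threats: list = field(default_factory=list)
    controls: list = field(default_factory=list)

def residual_risk(asset: Asset) -> dict:
    """Risk = likelihood x impact per threat, after controls reduce
    each factor (floored at 1)."""
    lr = sum(c.likelihood_reduction for c in asset.controls)
    ir = sum(c.impact_reduction for c in asset.controls)
    return {
        t.name: max(1, t.likelihood - lr) * max(1, t.impact - ir)
        for t in asset.threats
    }

# Hypothetical example: a training data set threatened by poisoning,
# partially controlled by provenance checks.
data_asset = Asset(
    "Training data set",
    threats=[Threat("Data poisoning", likelihood=3, impact=4)],
    controls=[Control("Provenance checks", likelihood_reduction=1)],
)
print(residual_risk(data_asset))  # {'Data poisoning': 8}
```

The point of automating this is that once assets, threats, and controls are captured as structured knowledge, residual risk can be recomputed whenever the system model changes.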

Our approach to trustworthiness risk assessment will be to model types of AI assets (using the SSM core ontology's ways of modelling data and processes) and the virtuous attributes (e.g., explainability, transparency and data quality) whose presence contributes to, or whose absence reduces, trustworthiness. We will then determine how these virtuous attributes are vulnerable, or how they can be threatened, leading to loss of trustworthiness. Finally, we will determine controls and control strategies that reduce vulnerability to loss of trustworthiness. We will demonstrate the principles for a representative set of AI systems using the SSM platform.
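The attribute-centred view above can be sketched as follows. This is a hedged illustration only: the attribute names, the example asset and threats, and the `trustworthy_attributes` function are all hypothetical, standing in for the richer ontology-based reasoning the SSM platform performs.

```python
# Each AI asset carries "virtuous attributes"; threats degrade attributes,
# and controls protect them. All names below are illustrative.

asset = {
    "name": "Loan-scoring model",
    "attributes": {"explainability": True, "data_quality": True, "transparency": False},
}

threats = [
    {"name": "Opaque model substitution", "degrades": "explainability"},
    {"name": "Label noise injection", "degrades": "data_quality"},
]

# control name -> virtuous attribute it protects
controls = {"Data validation pipeline": "data_quality"}

def trustworthy_attributes(asset, threats, controls):
    """Return the attributes that remain intact: present on the asset, and
    either unthreatened or protected by a control strategy."""
    protected = set(controls.values())
    threatened = {t["degrades"] for t in threats} - protected
    present = {a for a, held in asset["attributes"].items() if held}
    return present - threatened

print(trustworthy_attributes(asset, threats, controls))  # {'data_quality'}
```

Here explainability is lost because an uncontrolled threat degrades it, while data quality survives thanks to the validation control; transparency was never present, so it cannot contribute to trustworthiness.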

IT Innovation's Role


IT Innovation provides overall leadership for the project, extending the SSM risk modelling tool towards trustworthiness modelling of AI via a risk management approach.

F-TRIADS brings together IT Innovation's extensive experience in socio-technical risk modelling and builds on the Spyderisk methodology and tools.

Project Fact Sheet

The F-TRIADS project is a 5-month project funded by the UKRI Trustworthy Autonomous Systems (TAS) Hub.

Coordinator: IT Innovation Centre, University of Southampton
Website: https://tas.ac.uk/research-projects-2022-23/foundations-of-a-trustworthiness-risk-assessment-framework-for-ai-systems-f-triads/
More information: https://tas.ac.uk/

This project has received funding from the UKRI Trustworthy Autonomous Systems Hub.
