The Assessment List for Trustworthy Artificial Intelligence
This website contains the Assessment List for Trustworthy AI (ALTAI). ALTAI was developed by the High-Level Expert Group on Artificial Intelligence set up by the European Commission to help assess whether the AI system that is being developed, deployed, procured or used, complies with the seven requirements of Trustworthy AI, as specified in our Ethics Guidelines for Trustworthy AI.
- Human Agency and Oversight.
- Technical Robustness and Safety.
- Privacy and Data Governance.
- Transparency.
- Diversity, Non-discrimination and Fairness.
- Societal and Environmental Well-being.
- Accountability.
Goal and purpose of ALTAI
ALTAI aims to provide a basic evaluation process for Trustworthy AI self-evaluation. Organisations can draw the elements relevant to their particular AI system from ALTAI, or add elements to it as they see fit, taking into consideration the sector they operate in. It helps organisations understand what Trustworthy AI is and, in particular, what risks an AI system might generate. It raises awareness of the potential impact of AI on society, the environment, consumers, workers and citizens (in particular children and people belonging to marginalized groups). It promotes the involvement of all relevant stakeholders (within as well as outside of your organisation). It helps gain insight into whether meaningful and appropriate solutions or processes to accomplish adherence to the requirements are already in place (through internal guidelines, governance processes etc.) or need to be put in place.
A trustworthy approach is key to enabling “responsible competitiveness”, by providing the foundation upon which all those using or affected by AI systems can trust that their design, development and use are lawful, ethical and robust. ALTAI helps foster responsible and sustainable AI innovation in Europe. It seeks to make ethics a core pillar for developing a unique approach to AI, one that aims to benefit, empower and protect both individual human flourishing and the common good of society. We believe that this will enable Europe and European organisations to position themselves as global leaders in cutting-edge AI worthy of our individual and collective trust.
ALTAI was developed over two years, from June 2018 to June 2020. You can learn more about the work of the High-Level Expert Group and the feedback received by visiting our piloting phase for ALTAI (second half of 2019).
In 2018, the European Commission opened a process to select a group of experts in Artificial Intelligence (AI) from civil society, academia and industry. As a result, the High-Level Expert Group on Artificial Intelligence (AI HLEG) was created in June 2018, with a total of 52 members from different countries of the European Union (EU). The main objective of this independent group is to support the creation of the European Strategy for Artificial Intelligence, with a vision of "ethical, secure and cutting-edge AI". To this end, the group published two documents in its first year of activity: (i) the Ethics Guidelines for Trustworthy AI (the "Guidelines"), along with an assessment list of questions, and (ii) the Policy and Investment Recommendations.
Trustworthy AI is defined by three complementary concepts: Lawful AI, Ethical AI and Robust AI. The Guidelines take a human-centric approach to AI and identify four ethical principles and seven requirements that organisations should follow in order to achieve Trustworthy AI. The document is complemented by a set of questions for each of the seven requirements, which aim to operationalize the requirements (the "Assessment List"). The seven requirements are:
- Human Agency and Oversight: fundamental rights, human agency and human oversight.
- Technical Robustness and Safety: resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility.
- Privacy and Data Governance: respect for privacy, quality and integrity of data, access to data.
- Transparency: traceability, explainability, communication.
- Diversity, Non-discrimination and Fairness: avoidance of unfair bias, accessibility and universal design.
- Societal and Environmental Well-being: sustainability and environmental friendliness, social impact, society and democracy.
- Accountability: auditability, minimization and reporting of negative impact, trade-offs and redress.
Starting from June 2019, three main pathways were made available through a piloting process to collect feedback on the Assessment List: 1) an online survey ("quantitative analysis"); 2) a number of in-depth interviews with European organisations ("deep dives"); 3) reporting feedback through the AI Alliance. Based on the feedback collected through these three pathways, the Assessment List was revised, resulting in the current document, the Assessment List for Trustworthy AI ("ALTAI").
Implementation of feedback from the piloting process
The feedback showed four main areas for improvement: feasibility, content, structure and the relation to existing rules and best practices. The feedback has been taken into account in preparing ALTAI, which resulted in:
- A shorter, more coherent list of questions, reducing the effort of going through all of them.
- Intuitive and understandable questions.
- Consistency in wording.
- Consistency in phrasing and hierarchy of questions, striking a balance between awareness, assessment, insight and guidance.
- Logic jumps between questions.
- Clear distinction between legal obligations and recommendations.
- Limited use of qualifying adjectives that are open to interpretation.
- Pro-active language.
- Avoidance of overlaps and duplication of questions.
We tested the accessibility of the website using the WAVE Web Accessibility Evaluation Tool.
This website benefitted from the financial support of Science Foundation Ireland under Grant numbers 12/RC/2289-P2, 16/RC/3918, and 18/CRT/6223, which are co-funded under the European Regional Development Fund.