Transparency and Provision of Information

Article 13 mandates that high-risk AI systems must be accompanied by instructions for use, so that their operation is sufficiently transparent to deployers.

The instructions for use equip deployers with the information necessary to operate the system appropriately and to interpret its output.

A number of engineering practices can help providers to comply with the requirements of Article 13 by ensuring that the required information is collected in a structured and accessible manner.

Contents

The information contained in the instructions for use falls roughly into two categories: generic information that is not specific to a single model, and model-specific information about the machine learning model employed in the system. Both are detailed below.

There is significant overlap between the information required for the instructions for use and the information required for the technical documentation of high-risk AI systems. While the kind of information is the same, the intended audiences differ: Article 13 requires the instructions for use to be accessible and comprehensible to the deployer. In other words, providers must consider the technical capabilities and knowledge of the deployer when drafting the instructions.

Both documents can benefit from a structured approach to documentation, such as the use of model cards or experiment tracking.
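
As a sketch of such a structured approach, the huggingface_hub library offers a ModelCard abstraction that renders structured metadata into a versionable markdown file. The model name, description, and provider below are illustrative assumptions, not values prescribed by the Act:

```python
from huggingface_hub import ModelCard, ModelCardData

# Structured, machine-readable metadata for the card header.
card_data = ModelCardData(language="en", license="apache-2.0")

# Fill the default card template; the kwargs below are hypothetical
# placeholders that map loosely onto Art. 13(3) items.
card = ModelCard.from_template(
    card_data,
    model_id="example-ticket-classifier",  # hypothetical system name
    model_description=(
        "Intended purpose (Art. 13(3)(b)(i)): triage of incoming support "
        "tickets by urgency. Not intended for fully automated decisions."
    ),
    developers="Example Provider GmbH",  # Art. 13(3)(a): provider identity
)

# Version the rendered card alongside the model artefacts.
card.save("MODEL_CARD.md")
```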

Generic information

All information that is not specific to a single model or that is expected to remain stable over a long period:

  • Art. 13(3)(a): the identity and contact details of the provider
  • Art. 13(3)(b):
    • (i): intended purpose of the system
    • (vi): information about the expected input data schema; relevant information about the training, validation, and testing data sets (see the schema sketch after this list)
  • Art. 13(3)(e):
    • Computational/hardware resources needed
    • Expected lifetime of the system
    • Necessary maintenance and care measures (including software updates)
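
One way to make the expected input data schema (Art. 13(3)(b)(vi)) both explicit and testable is to publish it as a pydantic model that deployers can run against their own records. The field names and constraints below are hypothetical:

```python
from pydantic import BaseModel, Field, ValidationError

class TicketInput(BaseModel):
    """Hypothetical input schema for a ticket-triage system."""
    subject: str = Field(min_length=1, max_length=200)
    body: str = Field(min_length=1)
    language: str = Field(pattern=r"^[a-z]{2}$")  # ISO 639-1 code
    priority_hint: int | None = Field(default=None, ge=1, le=5)

# Deployers can validate a record before sending it to the system:
try:
    TicketInput(subject="Printer offline", body="No output since 9am.", language="en")
except ValidationError as exc:
    print(exc)  # human-readable report of schema violations
```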

Model-specific information

The remaining information to be included in the instructions for use refers to characteristics of the actual machine learning model employed in the system.

  • Art. 13(3)(b):
    • (ii): the level of accuracy (including its metrics), robustness, and cybersecurity against which the system has been tested and validated; any known or foreseeable circumstances that may impact these characteristics (Art. 15)
    • (iii): known or foreseeable circumstances which may lead to a risk (Art. 9); these can depend on the type of model
    • (iv): technical characteristics and capabilities of the system relevant to explain its outputs
    • (v): statistics about the system's performance regarding specific persons or groups of persons (see the per-group metrics sketch after this list)
  • Art. 13(3)(d): Human-oversight measures under Art. 14; technical measures that aid the interpretation of system outputs
  • Art. 13(3)(f): Information about record-keeping mechanisms under Art. 12 (collection, storage, and interpretation)
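
For the per-group performance statistics of Art. 13(3)(b)(v), fairlearn's MetricFrame computes the same metrics overall and broken down by a grouping variable. The evaluation data and the age-band grouping below are hypothetical:

```python
import pandas as pd
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical test-set labels and predictions (Art. 13(3)(b)(ii)).
y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
age_band = pd.Series(["<40", "<40", "40+", "40+", "<40", "40+", "40+", "<40"])

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "recall": recall_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=age_band,
)

print(mf.overall)   # metrics on the whole test set
print(mf.by_group)  # the same metrics per age band
```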

Legal Disclaimer

The information provided on this website is for informational purposes only and does not constitute legal advice. The tools, practices, and mappings presented here reflect our interpretation of the EU AI Act and are intended to support understanding and implementation of trustworthy AI principles. Following this guidance does not guarantee compliance with the EU AI Act or any other legal or regulatory framework. We are not affiliated with, nor do we endorse, any of the tools listed on this website.