
Inference Log

Compliance Info

Below we map this engineering practice to the articles of the AI Act that benefit from following it.

Implementing an inference log will help you achieve compliance with the following provisions:

  • Art. 12 (Record-Keeping), in particular:
    • Art. 12(1), since the inference log enables the recording of events
    • Art. 12(2), since the inference log allows the identification of potentially harmful situations and facilitates post-market monitoring
  • Art. 19 (Automatically Generated Logs)
  • Art. 26 (Obligations of Deployers of High-Risk AI Systems), in particular:
    • Art. 26(5) (Monitoring of the AI system's operation by the deployer)
    • Art. 26(6) (Keeping of system logs by the deployer)
  • Art. 72 (Post-Market Monitoring)

Motivation

An inference log is a permanent record of all inferences made by the AI system, including the input and output data, the model used, and relevant additional metadata.

The inference log serves as the basis for monitoring the AI system's operation, ensuring that it behaves as intended and complies with legal and ethical requirements.

Logging of inference data should allow for the reconstruction of the AI system's decision-making process, including the input data, the model used, and the output data. This is essential for understanding the AI system's behavior and for identifying and addressing any issues that may arise.

In addition to these auditability and traceability requirements, the inference log can also be used for other purposes, such as:

  • Model performance monitoring: The inference log can be used to track the performance of the AI system over time, allowing for the identification of any degradation in performance or changes in the input data distribution.
  • Model retraining: The inference log can serve as a source of data for retraining the AI system, allowing for continuous improvement of the model.

Implementation Notes

When it comes to implementing an inference log, there are several key considerations to keep in mind:

  • Data structure: The inference log should be designed to accommodate the specific data types and structures used in the AI system. This may include JSON or JSONB fields for input and output data, as well as additional metadata. Evolution of the data schema should also be considered, as the AI system and its inputs and outputs may change over time (see the sketch after this list).
  • Data retention: The inference log should be designed to accommodate the data retention requirements of the AI system (e.g., the retention periods set out in Art. 19 of the AI Act, or any other legal or regulatory requirements, such as under the GDPR).
  • Data protection and privacy: Access to the inference log should be restricted to authorized personnel only, and the data should be protected against unauthorized access or tampering. This may include encryption of sensitive data, as well as access controls and audit trails.
  • Performance and scalability: Since every inference made by the AI system will be logged, the inference log should be designed to handle the foreseeable load (both in terms of data rate and volume) and to support efficient querying and analysis. This may include the use of indexing, partitioning, or other techniques to optimize performance.
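
As a rough sketch of these considerations, the snippet below outlines a PostgreSQL-backed inference log with JSONB payload columns, time-based partitioning (which also simplifies enforcing retention periods), and an index to support querying. All table, column, and function names are illustrative assumptions, not a prescribed schema.

```python
# Illustrative sketch only: a PostgreSQL-backed inference log using psycopg2.
# Table, column, and function names are hypothetical; adapt them to your AI system.
import uuid
from datetime import datetime, timezone

import psycopg2
from psycopg2.extras import Json

DDL = """
CREATE TABLE IF NOT EXISTS inference_log (
    id            UUID        NOT NULL,
    created_at    TIMESTAMPTZ NOT NULL,
    model_name    TEXT        NOT NULL,
    model_version TEXT        NOT NULL,
    input         JSONB       NOT NULL,
    output        JSONB       NOT NULL,
    metadata      JSONB,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);
-- Time-based partitions must be created separately (e.g. by a scheduled job);
-- dropping expired partitions is a simple way to enforce retention periods.

CREATE INDEX IF NOT EXISTS inference_log_model_idx
    ON inference_log (model_name, model_version, created_at);
"""


def init_schema(conn) -> None:
    """Create the log table and index (run once at deployment time)."""
    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.commit()


def log_inference(conn, model_name, model_version, input_data, output_data, metadata=None) -> None:
    """Append a single, immutable inference record to the log."""
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO inference_log
                (id, created_at, model_name, model_version, input, output, metadata)
            VALUES (%s, %s, %s, %s, %s, %s, %s)
            """,
            (
                str(uuid.uuid4()),
                datetime.now(timezone.utc),
                model_name,
                model_version,
                Json(input_data),
                Json(output_data),
                Json(metadata or {}),
            ),
        )
    conn.commit()
```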

For LLM applications in particular, a variety of existing tools provide tracing and logging capabilities.

See the showcase for an example of how to implement and integrate an inference log into an AI system.

Key Technologies

Data Observability

  • whylogs, an open-source library for data logging (see the sketch after this list)
  • Seldon Core, an open-source platform for deploying and managing machine learning models on Kubernetes, implements a data flow paradigm that facilitates the logging of inference data
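
As a minimal illustration of how a data observability tool can complement the inference log, the sketch below profiles a batch of logged inference features with whylogs. It assumes the whylogs v1 Python API, and the column names are made up for the example.

```python
# Minimal sketch: profiling a batch of inference-log records with whylogs
# (v1-style API assumed). Column names are illustrative only.
import pandas as pd
import whylogs as why

# A batch of features extracted from the inference log.
batch = pd.DataFrame(
    {
        "prompt_length": [312, 128, 540],
        "output_length": [95, 40, 210],
        "latency_ms": [820, 310, 1450],
    }
)

# Create a statistical profile of the batch; profiles can be stored alongside
# the inference log and compared over time to detect drift or degradation.
results = why.log(batch)
print(results.view().to_pandas())  # summary statistics per column
```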

Custom Implementation

LLM Tracing and Observability

Note

The field of LLM tracing and observability is rapidly evolving, so this list may not be exhaustive.

  • MLflow Tracing, LLM tracing functionality that is part of the MLflow platform
  • Langfuse, an open-source LLM engineering platform
  • Langtrace, an open-source LLM observability tool based on the OpenTelemetry standard
  • LangChain Tracing
  • Phoenix, an open-source LLM observability tool based on OpenTelemetry
  • Tracely by Evidently (see above), an LLM application tracing tool based on OpenTelemetry; a minimal OpenTelemetry sketch follows after this list
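
Several of the tools above build on the OpenTelemetry standard. The sketch below shows, in plain OpenTelemetry terms, how an individual LLM inference could be recorded as a span. The attribute keys and the call_model stub are illustrative assumptions; the dedicated tools listed above typically provide richer, ready-made instrumentation.

```python
# Illustrative sketch: recording an LLM inference as an OpenTelemetry span.
# Attribute keys and call_model() are made up for this example.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console for demonstration; in production, point the
# exporter at your tracing backend instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("inference-log")


def call_model(prompt: str) -> str:
    """Stand-in for the actual model call."""
    return "stub completion"


def run_inference(prompt: str) -> str:
    # Each inference becomes one span carrying input, output, and model metadata.
    with tracer.start_as_current_span("llm.inference") as span:
        span.set_attribute("llm.model", "example-model-v1")
        span.set_attribute("llm.prompt", prompt)
        output = call_model(prompt)
        span.set_attribute("llm.completion", output)
        return output


print(run_inference("Explain Art. 12 of the AI Act in one sentence."))
```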

Cloud ML Platforms

Legal Disclaimer

The information provided on this website is for informational purposes only and does not constitute legal advice. The tools, practices, and mappings presented here reflect our interpretation of the EU AI Act and are intended to support understanding and implementation of trustworthy AI principles. Following this guidance does not guarantee compliance with the EU AI Act or any other legal or regulatory framework. We are not affiliated with, nor do we endorse, any of the tools listed on this website.