Understanding the mechanisms of enforcement within the EU Artificial Intelligence Act is important as its implementation unfolds. A significant tool in this regard is the general-purpose AI (GPAI) Code of Practice, which is currently under development by the AI Office and various stakeholders.
The Code of Practice, introduced in Article 56 of the AI Act, is intended to provide guidelines for compliance with the Act. It is seen as a crucial tool for demonstrating adherence to the Act's obligations during the interim period between the date the obligations for GPAI model providers take effect (August 2025) and the adoption of harmonised European standards for GPAI models, anticipated in August 2027 or later.
While the Code of Practice is not legally binding, GPAI model providers can choose to adhere to it to demonstrate compliance with their obligations under the Act. Providers that do not adhere to the Code may need to prove compliance with the relevant obligations through alternative means, which could be more burdensome and time-consuming. The contents of the Code are also likely to form the basis for the future GPAI standards.
The necessity for this Code stems from the different timelines for regulatory enforcement and technical standardisation. Obligations for GPAI models, set out in Articles 51–55 of the AI Act, become enforceable twelve months after the Act's entry into force, on August 2, 2025. The European standardisation process, typically led by bodies such as CEN and CENELEC, can take three years or longer, especially for technical standards like those required for GPAI models and particularly if coordinated with international standards. The multi-stakeholder, consensus-building nature of standards setting further extends this process. The Code of Practice is designed as a placeholder mode of compliance to bridge this gap.
The Code of Practice specifically serves as a way to demonstrate compliance with obligations found in Articles 53 and 55 of the AI Act. These obligations for GPAI model providers include:
Providing technical documentation to the AI Office and National Competent Authorities.
Furnishing relevant information to downstream providers who intend to integrate the GPAI model into their own AI or GPAI system, such as details about the model's capabilities and limitations.
Making publicly available a sufficiently detailed summary of the content used for training.
Having policies in place for complying with existing Union copyright law.
For GPAI models identified as having systemic risk, which is presumed if a model was trained using cumulative computation greater than 10^25 floating point operations (FLOPs), the Code addresses further obligations:
Implementing state-of-the-art model evaluations.
Conducting risk assessment and mitigation.
Reporting serious incidents, including details on corrective measures taken.
Ensuring adequate cybersecurity protection.
Based on the third draft, published on March 11, 2025, the Code is structured into three main sections:
The Transparency Section applies to all GPAI model providers. Signatories commit to creating and maintaining model documentation, providing information to downstream providers and the AI Office upon request, and ensuring the quality and security of this information. An interactive Model Documentation Form is included. This section does not apply to open-source AI models unless they pose systemic risk.
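To make the documentation commitment more concrete, the sketch below shows one way a provider might represent a subset of model documentation fields in code. The field names are illustrative assumptions only, loosely inspired by the idea of a Model Documentation Form; they are not the actual schema of the form in the draft Code.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified representation of model documentation fields.
# Field names are illustrative assumptions, not the draft Code's actual form.
@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    release_date: str
    intended_uses: List[str] = field(default_factory=list)
    capabilities: List[str] = field(default_factory=list)
    limitations: List[str] = field(default_factory=list)
    training_data_summary: str = ""     # high-level description of data sources
    compute_notes: str = ""             # e.g. training compute, where known

    def downstream_excerpt(self) -> dict:
        """Subset of fields a provider might share with downstream providers."""
        return {
            "model_name": self.model_name,
            "capabilities": self.capabilities,
            "limitations": self.limitations,
            "intended_uses": self.intended_uses,
        }
```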
The Copyright Section also applies to all GPAI model providers. Commitments include developing and implementing a copyright policy to ensure lawful access to training data (only reproducing and extracting lawfully accessible content when crawling), complying with rights reservations, mitigating the risk of producing copyright-infringing output, designating a point of contact, and allowing for the submission of complaints.
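Rights reservations are often expressed in machine-readable form, with robots.txt being the most common vehicle today. The snippet below is a minimal sketch of honouring such a reservation when crawling; it assumes robots.txt is the relevant opt-out signal and is not the compliance mechanism the Code itself prescribes.

```python
from urllib import robotparser
from urllib.parse import urlsplit

# Minimal sketch: check a site's robots.txt before crawling content for
# training. Assumes robots.txt is the machine-readable rights reservation;
# real compliance may need to honour other opt-out signals as well.
def may_crawl(url: str, user_agent: str = "example-training-crawler") -> bool:
    parts = urlsplit(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()          # fetch and parse robots.txt
    except OSError:
        return False       # be conservative if the file is unreachable
    return rp.can_fetch(user_agent, url)

# Example: skip pages whose operators have disallowed this crawler.
if may_crawl("https://example.com/articles/1"):
    pass  # fetch and extract the page
```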
The Safety and Security Section applies only to providers of GPAI models with systemic risk. Signatories must adopt and implement a Safety and Security Framework covering risk assessment, mitigation, and governance throughout the model lifecycle to keep systemic risks within acceptable levels. This involves assessing and mitigating risks, identifying and analysing systemic risks, implementing technical safety and security mitigations (including against insider threats), monitoring serious incidents, adopting non-retaliation protections, notifying the AI Office at specified milestones, and reporting implementation through Safety and Security Model Reports. Independent external assessment is required unless exceptions apply.
The Code is relevant to providers of 'General-purpose AI models,' which are defined as models exhibiting significant generality, capable of performing a wide range of distinct tasks, and integrable into various downstream systems. This definition applies regardless of the release method, unless the model is solely for research or prototyping before being placed on the market. A GPAI model provider is any entity that develops or has developed a GPAI model and places it on the market under its name.
The 'systemic risk' designation for GPAI models relates to their potential for high-impact capabilities that could significantly affect the EU market, potentially causing widespread negative effects on public health, safety, security, fundamental rights, or society. As noted, models are presumed to have these capabilities if trained using computation greater than 10^25 FLOPs. This is a rebuttable presumption. Currently, it is estimated that only a small number of providers globally have models that consistently surpass this threshold. The Commission is expected to provide further guidance on these definitions and obligations, including how model modifications like fine-tuning might affect obligations.
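For a rough sense of scale, training compute for dense transformer models is often approximated as 6 × parameters × training tokens. The snippet below uses that rule of thumb to check a hypothetical model against the 10^25 FLOPs presumption; both the approximation and the example figures are assumptions for illustration, not the Act's prescribed method for counting cumulative compute.

```python
# Rough rule of thumb for dense transformer training compute:
# total FLOPs ~= 6 * parameters * training tokens (illustrative approximation,
# not the AI Act's prescribed counting method).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the AI Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical examples:
# a 70B-parameter model trained on 15T tokens -> ~6.3e24 FLOPs, below the threshold
print(presumed_systemic_risk(70e9, 15e12))    # False
# a 400B-parameter model trained on 15T tokens -> ~3.6e25 FLOPs, above the threshold
print(presumed_systemic_risk(400e9, 15e12))   # True
```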
The drafting process for the Code has been ongoing since October 2024 and is required to be finished by May 1, 2025. This timeline allows for review and potential approval by the Commission. The process involves a multi-stakeholder approach with over a thousand participants from diverse backgrounds, including providers, deployers, trade associations, academics, experts, and civil society organizations. This extensive engagement is structured around four working groups aligned with the GPAI section of the AI Act, covering Transparency and copyright, Risk identification and assessment for systemic risks, Risk mitigation for systemic risks, and Internal risk management and governance for providers. Plenary sessions and dedicated workshops have also taken place to facilitate the drafting. The final version is expected in May 2025.