
Brussels Releases The European Union General-Purpose AI Code of Practice

Lessons from observing the EU AI Act’s Code of Practice development

The author wrote about these developments in prior exclusive Stankevicius articles, The European Union’s AI Act Goes Live: What Higher Education Institutions Need to Know and The European AI Act 2024: A Threat to International Academic Collaboration for Higher Education Institutions? Much has happened since those articles. On 10 July 2025, the European Union released in Brussels the finalized, voluntary General-Purpose AI (GPAI) Code of Practice for general-purpose AI models, translating the AI Act’s broad obligations into specific standards.

“After the Code is endorsed by Member States and the Commission, AI model providers who voluntarily sign it can show they comply with the AI Act by adhering to the Code. This will reduce their administrative burden and give them more legal certainty than if they proved compliance through other methods.”

This development is expected to strongly encourage frontier AI developers to adopt demonstrably safer practices. The 10²⁵ FLOP threshold, above which the AI Act presumes a general-purpose AI model poses systemic risk, encompasses all current frontier models and remains flexible enough to adjust as the technology progresses. Findings from EpochAI reveal a rapid acceleration in high-compute AI models. In 2017, only two models surpassed the 10²³ FLOP threshold in training compute. By 2020, that number doubled to four; by 2022, it rose sharply to 36. As of 2024, the dataset includes 201 models confirmed to exceed 10²³ FLOP, with an additional 126 likely surpassing this threshold based on available indicators. This growth reflects both increased AI investment and the declining cost of high-performance training hardware, making large-scale model development increasingly accessible (EpochAI).
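For a sense of scale, the sketch below is a hypothetical Python illustration, not a methodology prescribed by the AI Act or the Code: it uses the common rule of thumb of roughly 6 FLOP per parameter per training token to estimate a dense model’s training compute and compares that estimate against the 10²⁵ FLOP presumption threshold. The parameter and token counts are invented for illustration.

```python
# Hypothetical illustration only; not an official methodology from the AI Act or the Code.
# Uses the common ~6 FLOP per parameter per training token rule of thumb for dense models.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # AI Act presumption threshold for systemic-risk GPAI models


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per training token."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds 10^25 FLOP."""
    return estimated_training_flop(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP


if __name__ == "__main__":
    # Invented example: a 70-billion-parameter model trained on 15 trillion tokens.
    params, tokens = 7e10, 1.5e13
    print(f"Estimated training compute: {estimated_training_flop(params, tokens):.2e} FLOP")
    print("Presumed systemic risk:", presumed_systemic_risk(params, tokens))
```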

The Working Group Structure

The author had the privilege of observing the GPAI development process but must adhere to the Chatham House Rule, which is used around the world to encourage inclusive and open dialogue in meetings. The GPAI development process launched in July 2024 with over 1,400 participants from industry, academia, civil society, and EU Member States. Working group sessions began on November 18, 2024, when the author joined the first meeting of Working Group 4: Internal Risk Management and Governance of General-Purpose AI Providers.

Working Group 4 focused specifically on internal risk management and governance of general-purpose AI providers. Chair Marietje Schaake, a Fellow at Stanford’s Cyber Policy Center and former Member of the European Parliament, led the group alongside Vice-Chairs Markus Anderljung from the Centre for the Governance of AI and Anka Reuel, a Computer Science Ph.D. candidate at Stanford University. Working group participants could submit questions in advance and express interest in speaking during audience presentations.

The Questions We Grappled With

In our virtual meetings, we explored specific technical questions about governance measures. We discussed whether measures 15, 16, and 17 for providers of GPAI models with systemic risks should be updated to be more applicable to startups or Small and Medium-sized Enterprises. Together, we examined whether measure 17 should be adapted for open-source or other release strategies. Key governance questions included: What specific metrics or criteria should gauge whether an AI model aligns with international human rights law, ethical standards, and societal good? How should internal governance structures for overseeing risk management in AI development work, and who should be held accountable? We also explored procedural matters: What procedures should detect and respond to incidents when a model’s behavior causes unanticipated harm? How often should model providers review and update their risk management practices?

Observations and Reflections

Observing this cross-disciplinary process as a working group member revealed the complexity of translating high-level legal obligations into practical governance measures. The stakeholder questions demonstrated genuine uncertainty about implementation: How should the Code ensure that GPAI providers use it? How should compliance be ensured? What is the process? The global nature of AI development created additional complexity. Participants noted that displacement of humans in the workplace and deterioration of skills were not included in the taxonomy of systemic risk. The voluntary nature of the Code raised questions about effectiveness. While providers who sign benefit from reduced administrative burden and increased legal certainty, ensuring broad adoption remains an open challenge.

For the author, this eight-month journey was an important manifestation of democratic governance of transformative technologies. Whether the Code succeeds will depend on continued stakeholder engagement, adaptive implementation, and the willingness of both regulators and industry to maintain meaningful transdisciplinary dialogues as AI capabilities continue to evolve.

Transparency, Copyright, Safety and Security

Once the GPAI Code of Practice is endorsed by Member States and the Commission, AI model providers who voluntarily sign it can show they comply with the AI Act by adhering to the Code. The Code is organized into three chapters: Transparency, Copyright, and Safety and Security.

The AI Act requires model providers to ensure sufficient transparency, which enables downstream providers to integrate these models into their own products. The Transparency chapter of the Code offers a user-friendly Model Documentation Form, which allows providers to easily document the necessary information in a single place. The Copyright chapter of the Code offers providers practical solutions to put in place a policy complying with EU copyright law. Some general-purpose AI models could carry systemic risks, such as risks to fundamental rights and safety, including lowering barriers for the development of chemical or biological weapons, or risks related to loss of control over the model. The AI Act mandates that model providers assess and mitigate these systemic risks, and the Safety and Security chapter contains relevant state-of-the-art practices for systemic risk management.

European Commission – Press release, 2025

In Shaping Europe’s digital future, readers can find detailed information on each chapter:

  • “The Transparency chapter offers a user-friendly Model Documentation Form which allows providers to easily document the information necessary to comply with the AI Act obligation on model providers to ensure sufficient transparency.
  • The Copyright chapter offers providers practical solutions to meet the AI Act’s obligation to put in place a policy to comply with EU copyright law.
  • The Safety and Security chapter outlines concrete state-of-the-art practices for managing systemic risks, i.e. risks from the most advanced models. Providers can rely on this chapter to comply with the AI Act obligations for providers of general-purpose AI models with systemic risk.” (The General-Purpose AI Code of Practice, 2025)
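To make the idea of documenting the necessary information in a single place concrete, the following Python sketch shows a simplified, single-record documentation object. The field names are the author’s paraphrase of the kind of information the Model Documentation Form gathers; they are not the official form’s schema, and the example values are invented.

```python
# Illustrative sketch only: these fields paraphrase the kind of information the
# Model Documentation Form gathers; they are not the official form's schema.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ModelDocumentation:
    """A single-place record a provider might keep for downstream transparency."""
    model_name: str
    provider: str
    release_date: str
    intended_uses: List[str] = field(default_factory=list)
    training_data_summary: str = ""
    estimated_training_flop: Optional[float] = None
    known_limitations: List[str] = field(default_factory=list)


# Hypothetical entry for an invented model.
doc = ModelDocumentation(
    model_name="example-gpai-model",
    provider="Example AI Ltd.",
    release_date="2025-08-02",
    intended_uses=["text generation", "code assistance"],
    training_data_summary="Publicly available web text; details in the data card.",
    estimated_training_flop=6.3e24,
    known_limitations=["may produce inaccurate output", "English-centric"],
)
print(doc.model_name, f"{doc.estimated_training_flop:.1e} FLOP")
```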

The AI Act’s obligations for providers of general-purpose AI models apply from August 2, 2025, and providers must notify the AI Office of general-purpose AI models with systemic risk without delay. During the first year from August 2, 2025, the AI Office will offer collaborative support: if providers who adhere to the Code do not fully implement all commitments immediately after signing, the AI Office will not consider them to have broken their commitments under the Code; instead, it will treat them as acting in good faith and work with them to ensure full compliance. From August 2, 2026 onwards, however, the Commission will enforce full compliance with all obligations for providers of general-purpose AI models, backed by fines. Models placed on the market before August 2, 2025 must comply with the AI Act obligations by August 2, 2027.

Papadatos, in How the EU’s Code of Practice Advances AI Safety (2025), argues that the Code provides a powerful incentive to push frontier developers toward measurably safer practices through required risk analysis. This analysis must include:

  • Model-independent information, including forecasting
  • State-of-the-art model evaluations
  • Risk modeling

Companies must follow a clear cycle: identify risks, analyze current risk levels, determine acceptability, and implement mitigations until risks reach acceptable levels. The Code mandates documentation through two key instruments: a Framework outlining overall safety policies and Model Reports showing Framework application for each specific model. Companies must predefine acceptable risk levels and maintain risks below these predetermined thresholds.
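That cycle can be pictured as a simple loop: score each identified risk, compare it against the predefined acceptable level, and mitigate and re-score until everything falls within the threshold. The Python sketch below is the author’s illustration of that loop; the risk names, numeric scores, and mitigation effect are hypothetical, since the Code leaves concrete metrics and acceptance thresholds to each provider’s Framework.

```python
# Author's illustration of the identify -> analyze -> accept/mitigate cycle described above.
# Risk names, scores, and the mitigation effect are hypothetical; the Code leaves concrete
# metrics and acceptance thresholds to each provider's Framework.
from typing import Callable, Dict


def risk_management_cycle(risks: Dict[str, float],
                          acceptable_level: float,
                          mitigate: Callable[[str, float], float],
                          max_rounds: int = 10) -> Dict[str, float]:
    """Mitigate and re-score until every identified risk is at or below the predefined level."""
    for _ in range(max_rounds):
        unacceptable = {name: level for name, level in risks.items() if level > acceptable_level}
        if not unacceptable:
            break  # all risks are within the predetermined threshold
        for name, level in unacceptable.items():
            risks[name] = mitigate(name, level)  # apply a mitigation, then re-score
    return risks


# Hypothetical usage: assume each mitigation round cuts a risk score by 30%.
initial = {"cbrn_uplift": 0.8, "loss_of_control": 0.5, "cyber_offence": 0.3}
final = risk_management_cycle(initial, acceptable_level=0.25,
                              mitigate=lambda name, level: level * 0.7)
print(final)
```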

Riveting reading on frontier AI safety policies for Anthropic, OpenAI, Google DeepMind, Magic, Naver, Meta, G42, Cohere, Microsoft, Amazon, xAI, and Nvidia can be found in the March 2025 report by METR (Model Evaluation & Threat Research): Common Elements of Frontier AI Safety Policies.

The Virtue of Practical Wisdom

Observing Working Group 4’s deliberations brought to mind Aristotle’s concept of phronesis, or practical wisdom, from the Nicomachean Ethics. Aristotle argued that virtue lies not in rigid adherence to abstract principles, but in the careful application of wisdom to particular circumstances. The challenge is finding the mean between extremes, adapting universal principles to specific contexts.

For the author, the Code of Practice embodies this Aristotelian approach. Rather than imposing inflexible rules, it seeks to cultivate good judgment in AI governance. The collaborative first-year implementation period reflects recognition that virtue in technology governance requires learning through practice, not just compliance with predetermined standards. The stakeholder questions revealed this tension between universal principles and particular applications. How do you apply human rights standards to systems that process information in ways humans cannot comprehend? How do you balance innovation with precaution when the stakes include societal welfare?

Like Aristotle’s virtuous person who develops good character through habituation, the Code assumes that responsible AI governance emerges through sustained practice of good judgment rather than mere rule-following. The voluntary nature of the Code reflects confidence that organizations will choose virtuous action when provided with clear guidance and appropriate incentives.

Dr. Jasmin (Bey) Cowin, a columnist for Stankevicius, employs the ethical framework of Nicomachean Ethics to examine how AI and emerging technologies shape human potential. Her analysis explores the risks and opportunities that arise from tech trends, offering personal perspectives on the interplay between innovation and ethical values. Connect with her on LinkedIn.

Dr. Jasmin Cowin
