The European AI Act 2024: A Threat to International Academic Collaboration for Higher Education Institutions?

Fundamentals of The European AI Act 2024

On July 12, 2024, the EU published Regulation (EU) 2024/1689, known as the EU AI Act, marking a significant milestone as the first comprehensive AI regulation globally. The regulation arrived more than three years after the EU Commission's initial draft of April 2021. The AI Act establishes a broad, sector-independent regulatory framework designed to set the standard for AI governance within the EU, and it is expected to influence both local and global companies as well as legislative approaches beyond Europe. Importantly, the AI Act's scope extends beyond the EU: it applies to entities worldwide that may not be physically present in any EU member state but engage with the EU market. The Act, the culmination of a lengthy legislative process, entered into force on August 1, 2024, with its obligations phasing in over the following years. Higher Education Institutions (HEIs) should familiarize themselves with the implications of this regulation, understand their responsibilities, and prepare for compliance by the relevant deadlines. This series of blogs, written exclusively for Stankevicius by the author, aims to highlight the critical aspects of the AI Act that HEIs need to grasp to be ready for the upcoming regulatory changes, specifying important deadlines for meeting these new requirements.


Under the AI Act, numerous technologies typically linked with AI are not classified within the defined category of an “AI System,” and thus are not covered by the regulation. Organizations must grasp the Act’s specific definition of an “AI System.” The concept of an “AI System” has evolved through multiple iterations since the original introduction of the text. This evolution has been particularly influenced by the rising prominence and enhanced capabilities of generative AI, large language models (LLMs), and other cutting-edge technologies, reflecting their rapid development and widespread adoption.

In the published version of the EU AI Act, an AI System is defined as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Article 3(1), EU AI Act

The AI Act employs a risk-based framework to regulate AI systems, grounded in a broader EU approach to identifying and managing AI-related risks. According to the supplemental publication, The European Union Artificial Intelligence Act, the AI Act will unify how AI is regulated across the single market of the 27 EU Member States. It also has important extraterritorial implications, as it covers all AI systems impacting people in the EU, regardless of where those systems are developed or deployed (p. 2).

There are four key objectives:

(i) To ensure that AI systems placed on the EU market are safe and respect fundamental rights

(ii) To ensure legal certainty to facilitate investment and innovation in AI

(iii) To enhance governance and effective enforcement of EU law on fundamental rights and safety requirements applicable to AI systems

(iv) To facilitate the development of a single market for lawful, safe and trustworthy AI applications, and prevent market fragmentation

The framework identifies four categories of risk:

  1. Unacceptable Risk: A classification reserved for AI Systems or uses that pose a significant risk of harm and unacceptable risks to individuals and their rights (e.g., a system designed to manipulate elderly members of society).
  2. High Risk: A classification for AI Systems and uses that fall within specific High-Risk categories of use cases (e.g., in the course of employment) or system types (e.g., those requiring assessment under existing regulation such as the Medical Devices Regulation) and are not otherwise exempted or prohibited.
  3. Limited Risk: A classification for AI Systems or uses that do not fall within the High-Risk category but do pose certain transparency risks, and so carry transparency requirements not associated with Minimal Risk systems (e.g., chatbots).
  4. Minimal Risk: A classification for AI Systems or uses with minimal impact on individuals and their rights (e.g., spam filters); these are largely unregulated by the AI Act directly and are instead governed by other EU-wide and national legislation.
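The four tiers above can be pictured as a simple lookup from use case to obligation level. The following Python sketch is the author's illustrative toy, not an official classification tool: the tier names come from the Act, but the example systems and the helper function are hypothetical, and any real classification requires legal analysis.

```python
# Illustrative only: the four risk tiers of the EU AI Act mapped to
# hypothetical example use cases an HEI might encounter. The example
# systems listed here are the author's assumptions for illustration.
RISK_TIERS = {
    "unacceptable": ["social scoring system", "manipulative targeting of vulnerable groups"],
    "high": ["admissions scoring", "exam proctoring", "automated grading"],
    "limited": ["student-facing chatbot"],
    "minimal": ["spam filter"],
}

def lookup_tier(use_case: str) -> str:
    """Return the illustrative risk tier for a known example use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified - requires legal assessment"

print(lookup_tier("exam proctoring"))  # high
print(lookup_tier("spam filter"))      # minimal
```

The point of the sketch is that obligations scale with the tier: an "unacceptable" match means prohibition, "high" triggers the provider duties discussed later in this piece, and "limited" mainly triggers transparency notices.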

2. The EU AI Act Framework

While the operative articles define key terms such as the Article 3(1) definition quoted above, the Act's recitals explain why a harmonized, Union-wide framework is needed. Recital 3 observes:

"AI systems can be easily deployed in a large variety of sectors of the economy and many parts of society, including across borders, and can easily circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that AI is trustworthy and safe and is developed and used following fundamental rights obligations."

Recital 3, EU AI Act

Diverging national rules may lead to the fragmentation of the internal market and may decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured to achieve trustworthy AI. Divergences hampering the free circulation, innovation, deployment and uptake of AI systems and related products and services within the internal market should be prevented by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and rights of persons throughout the internal market, based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). However, this is a document in flux, as the author discovered on her mission to make sense of the various online versions and of the European Parliament's proposals for the amendment of the Treaties.

The EU AI Act consists of 13 chapters. Each chapter contains a set of Articles.

  • Chapter I: General Provisions
  • Chapter II: Prohibited Artificial Intelligence Practices
  • Chapter III: High-Risk AI Systems
  • Chapter IV: Transparency Obligations for Providers and Deployers of Certain AI Systems and GPAI Models
  • Chapter V: General Purpose AI Models
  • Chapter VI: Measures in Support of Innovation
  • Chapter VII: Governance
  • Chapter VIII: EU Database for High-Risk AI Systems
  • Chapter IX: Post-Market Monitoring, Information Sharing, Market Surveillance
  • Chapter X: Codes of Conduct and Guidelines
  • Chapter XI: Delegation of Power and Committee Procedure
  • Chapter XII: Confidentiality and Penalties
  • Chapter XIII: Final Provisions

In addition, 13 Annexes provide supplementary information alongside the Regulation.

  • Annex I: List of Union Harmonization Legislation
  • Annex II: List of Criminal Offences
  • Annex III: High-Risk AI Systems
  • Annex IV: Technical Documentation
  • Annex V: EU Declaration of Conformity
  • Annex VI: Conformity Assessment Procedure Based on Internal Control
  • Annex VII: Conformity Based on Assessment of Quality Management System and Assessment of Technical Documentation
  • Annex VIII: Information to be Submitted upon the Registration of High-Risk AI Systems (Article 49)
  • Annex IX: Information to be Submitted upon the Registration of High-Risk AI Systems (Annex III / Article 60)
  • Annex X: Union Legislation on Large-Scale IT Systems in the Area of Freedom, Security and Justice
  • Annex XI: Technical Documentation Referred to in Article 53(1), point (a)
  • Annex XII: Transparency Information Referred to in Article 53(1), point (b)
  • Annex XIII: Criteria for the Designation of General Purpose AI models with Systemic Risk

The Commission’s ability to update the Act’s provisions as technology progresses will further influence the trajectory of AI in these domains over time.

3. Impact Projection on Collaboration Between Non-Schengen and Schengen Higher Education Institutions

The EU AI Act emphasizes creating a consistent regulatory framework across the EU to ensure trustworthy and safe AI development and usage. Non-Schengen institutions will need to adhere to the Act's stringent requirements, particularly regarding data protection, transparency, and accountability. The Act aims to 'harmonize' AI regulations, reducing legal uncertainties and ensuring a high level of protection across the EU.

The forthcoming enforcement of standardized and regulated environments for AI will affect all HEIs, not only those located within the EU but also those interacting with European students. This widespread impact calls for an in-depth discussion and examination of potential consequences, underscoring the need for these institutions to prepare for changes that may affect their operations globally. The author, who is not a lawyer, believes that collaborative programs between non-Schengen and Schengen institutions need to be revisited in light of the European AI Act. While the EU Act emphasizes the importance of trust, transparency, and accountability in AI applications, a fresh analysis is needed of how dual degree programs and shared learning management systems are implemented, managed, and maintained across borders. Non-Schengen institutions will need to ensure their AI systems meet the stringent requirements set forth by the Act, particularly regarding data protection and ethical AI use (Consilium Europa).

4. To Complicate Matters: The European Union (EU) Policy Initiative

It is unclear to the author how the European AI Act aligns with the European Union (EU) policy initiative the Digital Education Action Plan (2021-2027), which sets out a vision of digital education in Europe to support the adaptation of the education and training systems of Member States to the digital age. The author found that Priority 1 of the European Commission's Digital Education Action Plan (2021-2027) is 'Fostering the development of a high-performing digital education ecosystem,' beginning with Action 1: a Structured Dialogue with Member States on digital education and skills. To clarify, no mention is made of collaboration with international HEIs beyond the Member States. The Digital Education Action Plan, coordinated by the Digital Education unit of the Directorate-General for Education, Youth, Sport and Culture (DG EAC), was adopted on 30 September 2020. Priority 2 of the action plan focuses on 'Enhancing digital skills and competencies for the digital transformation,' including:

  • Action 7: Common guidelines for teachers and educators to foster digital literacy and tackle disinformation through education and training
  • Action 8: Updating the European Digital Competence Framework to include AI and data-related skills
  • Action 9: European Digital Skills Certificate (EDSC)
  • Action 10: Council recommendation on improving the provision of digital skills in education and training
  • Action 11: Cross-national collection of data and an EU-level target on student digital skills

While the EU AI Act and the Digital Education Action Plan (2021-2027) represent significant steps toward integrating digital education and AI competencies across the EU, much remains unclear for HEIs. Especially for international programs that utilize student data, grant dual degrees, apply for shared grant money, or rely on learning management systems (LMS) and proctoring technologies, to name just a few, there remains an urgent need for greater clarity and alignment between these regulatory frameworks and policy initiatives.

5. Managing Compliance Across Borders

Data protection and compliance will be managed through adherence to both General Data Protection Regulation (GDPR) and the new AI Act’s requirements. Non-Schengen institutions must ensure that any cross-border data transfers comply with EU data protection standards, including obtaining necessary consent and implementing robust data security measures. This will involve close cooperation with Schengen institutions to align data handling practices and ensure mutual compliance.

In the author’s opinion, the European AI Act’s classification of AI systems used in education as high-risk means that shared grading systems and proctoring services between non-Schengen and Schengen institutions will be subject to stringent regulations. These systems must ensure fairness, transparency, and the mitigation of biases. Institutions will need to implement safeguards such as detailed performance documentation and user notification protocols when AI is used in these contexts.

Article 6 states that “an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.” Annex III (High-Risk AI Systems) lists the high-risk AI systems pursuant to Article 6(2) as the AI systems falling within any of the following areas:

3. Education and vocational training:

(a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;

(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;

(c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;

(d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.
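The four Annex III, point 3 categories quoted above can be restated as a screening step an HEI might run over its inventory of AI tools. The sketch below is the author's hypothetical illustration: the keys follow the Annex's lettering, but the descriptions are paraphrases, and matching declared purposes by substring is a toy stand-in for the legal assessment an institution would actually need.

```python
# Hypothetical screening helper. The four entries paraphrase Annex III,
# point 3 of the EU AI Act; real classification requires legal review,
# not string matching.
ANNEX_III_EDUCATION = {
    "3(a)": "determine access or admission or assign persons to institutions",
    "3(b)": "evaluate learning outcomes",
    "3(c)": "assess the appropriate level of education",
    "3(d)": "monitor and detect prohibited behaviour during tests",
}

def education_high_risk_points(intended_purposes: list[str]) -> list[str]:
    """Return the Annex III point(s) whose paraphrase matches a declared purpose."""
    matches = []
    for point, description in ANNEX_III_EDUCATION.items():
        if any(purpose in description for purpose in intended_purposes):
            matches.append(point)
    return matches

# A shared proctoring service between partner institutions:
print(education_high_risk_points(["detect prohibited behaviour"]))  # ['3(d)']
```

Any non-empty result would mean the system is presumptively high-risk, pulling in the provider obligations summarized in the next section.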

According to the AI Act Overview (p. 3), the requirements for providers of high-risk AI systems (Art. 8-25), which the author understands to include HEIs acting as providers, are as follows:

High-risk AI providers must:

  • Establish a risk management system throughout the high-risk AI system's lifecycle;
  • Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.
  • Draw up technical documentation to demonstrate compliance and provide authorities with the information to assess that compliance.
  • Design their high-risk AI system for record-keeping to enable it to automatically record events relevant for identifying national level risks and substantial modifications throughout the system's lifecycle.
  • Provide instructions for use to downstream deployers to enable the latter's compliance.
  • Design their high-risk AI system to allow deployers to implement human oversight.
  • Design their high-risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity.
  • Establish a quality management system to ensure compliance.
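The eight obligations above lend themselves to a tracking structure a compliance office might keep per AI system. The following sketch is the author's shorthand, assuming one boolean per obligation; the field names are hypothetical labels, not terms from the Act.

```python
# Hypothetical per-system compliance tracker mirroring the eight provider
# obligations for high-risk AI systems summarized above. Field names are
# the author's shorthand for illustration only.
from dataclasses import dataclass, fields

@dataclass
class HighRiskProviderChecklist:
    risk_management_system: bool = False
    data_governance: bool = False
    technical_documentation: bool = False
    automatic_record_keeping: bool = False
    instructions_for_deployers: bool = False
    human_oversight_design: bool = False
    accuracy_robustness_cybersecurity: bool = False
    quality_management_system: bool = False

    def outstanding(self) -> list[str]:
        """List the obligations not yet marked complete."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: an institution that has only set up risk management so far.
checklist = HighRiskProviderChecklist(risk_management_system=True)
print(len(checklist.outstanding()))  # 7 obligations still open
```

Even this toy view makes the scale of the task visible: each deployed high-risk system carries its own full set of obligations across its lifecycle.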

6. Uncharted Territories, High Risk, and HEIs' 'Brave New World' in the EU

The European AI Act and the Digital Education Action Plan (2021-2027) embody the EU’s ambition to integrate digital education and AI competencies. However, a critical examination reveals significant shortcomings, particularly concerning the impact on international collaboration for HEIs, research, and international program collaboration. The EU AI Act imposes stringent data collection and high-risk AI regulations, yet it lacks explicit guidelines and support mechanisms for HEIs to navigate these complexities effectively. This regulatory framework not only complicates compliance for HEIs but also risks undermining the vitality and energy of international student programs, which are crucial for fostering global understanding and diplomacy.

One of the most glaring issues is the onus of compliance placed on non-Schengen HEIs without clear guidelines or support structures, apart from what appears to be a well-thought-out and robust mechanism for collecting substantial penalties for non-compliance: “Penalties: The fines for infringements to the AI act are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. SMEs and start-ups are subject to proportional administrative fines.” The AI Act mandates that institutions ensure compliance with high-risk AI system requirements, including detailed performance documentation, user notification protocols, and robust data protection measures. However, the Act does not specify whom non-Schengen institutions should approach for guidance or resolution of compliance issues. The author's preliminary research into the new governance architecture shows the following:

  • An AI Office within the Commission to enforce the common rules across the EU
  • A scientific panel of independent experts to support the enforcement activities
  • An AI Board with member states’ representatives to advise and assist the Commission and member states on consistent and effective application of the AI Act
  • An advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission

There is no specialized ombudsperson or office designated to assist international, non-Schengen HEIs, nor is there a decision tree to help navigate the new regulatory landscape. One wonders who has the authority to conduct inquiries, mediate conflicts, and recommend corrective actions to address issues such as administrative unfairness, misconduct, or violations of rights when interacting with the EU AI Office.

This lack of clear support mechanisms leaves HEIs in a precarious position. Institutions are expected to manage compliance autonomously, which can be particularly challenging for those involved in international programs that rely on cross-border data sharing, learning management systems (LMS), proctoring technologies, simulation spaces, online credentials and their verification, or the use of virtual learning spaces such as metaverses. Without a centralized source of guidance, HEIs must independently interpret the regulations, increasing the risk of non-compliance, far-reaching legal ramifications, and possibly “death star” financial consequences for institutions.

The next blog

Preparing for the EU AI Act: An Initial Cost Perspective for Global Higher Education Institutions. In this blog, the author explores the preliminary cost considerations for HEIs as they begin to prepare for compliance with the newly enacted European AI Act. While specific data is not yet available, the blog draws on existing regulatory compliance frameworks, research, and early estimates to provide a grounded perspective.

Further reading:

European Parliament. (2024). Artificial Intelligence Act Overview. Retrieved from https://artificialintelligenceact.eu/wp-content/uploads/2024/01/AI-Act-Overview_24-01-2024.pdf

This article was produced by Dr. Jasmin (Bey) Cowin, Associate Professor and U.S. Department of State English Language Specialist (2024). As a columnist for Stankevicius, she writes on Nicomachean Ethics: Insights at the Intersection of AI and Education. Get in touch via LinkedIn.
