The EU AI Act: An Initial Cost Perspective for Global Higher Education Institutions 

This is the second blog written exclusively for Stankevicius. Blog one covered The European AI Act 2024: A Threat to International Academic Collaboration for Higher Education Institutions? With this series, the author aims to highlight the critical aspects of the AI Act that Higher Education Institutions (HEIs) need to analyze to ensure readiness for the upcoming regulatory changes, specifying important deadlines for meeting these new requirements.

On July 12, 2024, the EU published Regulation (EU) 2024/1689, known as the EU AI Act, marking a significant milestone as the first comprehensive AI regulation globally. The regulation arrived more than three years after the EU Commission's initial draft of April 2021. The AI Act establishes a broad, sector-independent regulatory framework designed to set the standard for AI governance within the EU, and it is expected to influence both local and global companies as well as legislative approaches worldwide. Importantly, the AI Act's scope extends beyond the EU: it applies to entities worldwide that are not physically present in any EU member state but engage with the EU market. The Act entered into force on August 1, 2024, the culmination of a lengthy legislative process. HEIs should familiarize themselves with the implications of this regulation, understand their responsibilities, and prepare for compliance by the relevant application dates.

Collaborative Degree Programs, Dual Studies, and Student Exchanges 


The European Union’s Artificial Intelligence Act (EU AI Act) has significant implications for HEIs, particularly those offering collaborative degree programs, dual studies, and student exchanges. These programs, which involve students attending several HEIs in different countries or combining academic studies with vocational training, are now subject to new obligations under the Act. The Digital Education Council, in its publication EU AI Act: What it means for universities, states: “We argue that the EU’s system is complex and likely to be resource intensive. Universities inside the EU need to have good governance processes to deal with these regulations and emerging impacts on their operations and registration.” They further state: “Based on our analysis of the Act, we made 3 predictions:
1. Innovation and abilities of smaller companies to compete on a global stage are likely to be reduced due to the complicated requirements
2. EdTech solutions dependent on customization might be limited in their potential due to the restrictions on data collection and usage
3. Companies are more likely to be risk-averse due to the complexity of regulations.”

Currently, there are limited resources for HEIs to turn to. The European Edtech Alliance (EEA), a not-for-profit that supports innovation ecosystems for national trade associations, clusters, incubators, and organizations working with founders and providers of education technology, states that the Commission plans to set up “An AI-on-demand Platform as a central European toolbox of AI resources (e.g., expertise, algorithms, software frameworks, development tools) needed for industry and public sector uses.”

The Risk-Based Approach 

The EU AI Act classifies AI systems according to the level of risk they pose. This risk-based approach is particularly relevant for higher education institutions, as they often utilize AI systems in various capacities, from administrative tasks to student learning and evaluation. For instance, AI systems used in sectors such as education and vocational training are classified as high-risk. These systems must undergo rigorous assessments of their safety, transparency, and data governance before being deployed.  

Prohibited Practices and Exceptions 

The Act bans AI systems that pose an unacceptable risk. This includes systems that manipulate human behavior to circumvent users’ free will, systems that use biometric categorization to infer sensitive characteristics, and systems that enable ‘social scoring’ by governments. However, there are narrow exceptions for certain practices that would otherwise be prohibited, such as the use of remote biometric identification systems by law enforcement.

Key Points of the EU AI Act

The EU AI Act provides specific guidelines on data governance for all AI systems, including those used by educational institutions. After spending some time analyzing the various EU AI Act documents and annexes, the author, who is neither a data specialist nor a lawyer, came to the following conclusions on the key points for HEIs:

  1. High-Quality Data Sets: High-risk AI systems must use high-quality data sets for training, validation, and testing. These data sets should be managed professionally, considering factors like data collection processes, data preparation, potential biases, and data gaps. 
  2. Data Governance Practices: Training, validation, and testing of data sets should be subject to robust institutional data governance and management practices appropriate for the intended purpose of the high-risk AI system. These practices should concern the relevant design choices, data collection processes, data-preparation processing operations, formulation of assumptions, assessment of the availability, quantity, and suitability of the data sets, examination of biases, measures to detect, prevent, and mitigate biases, and identification of relevant data gaps or shortcomings. 
  3. Relevance and Representativeness: Training, validation, and testing data sets should be relevant, sufficiently representative, and to the best extent possible, free of errors and complete given the intended purpose.
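To make the checklist above concrete, here is a minimal, standard-library-only Python sketch of the kind of data-set checks these points describe: flagging data gaps (missing fields) and a crude representativeness signal (label imbalance). The threshold, field names, and sample records are the author's illustrative assumptions, not anything prescribed by the Act.

```python
from collections import Counter

def dataset_report(records, label_key, imbalance_ratio=3.0):
    """Flag missing fields (data gaps) and a heavily skewed label
    distribution (a rough proxy for representativeness problems)."""
    issues = []
    for i, row in enumerate(records):
        missing = [k for k, v in row.items() if v is None or v == ""]
        if missing:
            issues.append(f"record {i}: missing {missing}")
    counts = Counter(r[label_key] for r in records if r.get(label_key))
    if len(counts) > 1:
        most, least = max(counts.values()), min(counts.values())
        if most / least > imbalance_ratio:
            issues.append(f"label imbalance: {dict(counts)}")
    return issues

# Hypothetical grading data set: one record has a gap, and "pass"
# outnumbers "fail" four to one.
sample = [
    {"essay": "text", "grade": "pass"},
    {"essay": "", "grade": "pass"},
    {"essay": "text", "grade": "pass"},
    {"essay": "text", "grade": "pass"},
    {"essay": "text", "grade": "fail"},
]
print(dataset_report(sample, "grade"))
```

A real audit under Article 10 would of course go much further (bias measurement, provenance, documentation), but even a small script like this makes data gaps visible early.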

According to Article 113, Article 10 (Data and Data Governance) applies from August 2, 2026. Paragraph 5 of Article 10 reads:
5. To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: 
(a) the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymized data; 
(b) the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymization; 
(c) the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorized persons have access to those personal data with appropriate confidentiality obligations; 


Transparency and Data Governance 

Transparency is a key aspect of the EU AI Act. For AI systems that interact with people, users must be informed that they are interacting with artificial intelligence. Moreover, providers of AI systems must ensure data governance measures are in place: the data used to train, validate, and test AI systems must be managed carefully to avoid risks and unintended biases. HEIs often utilize AI systems in various capacities, and many of these can be classified as high-risk under the EU AI Act. Here are a few examples:

  1. Assessment and Evaluation Systems: AI systems used for assessing and evaluating student performance can be considered high-risk. These systems often have significant implications for students’ academic progress and future opportunities, making their accuracy and fairness critical. 
  2. Predictive Analytics: AI systems used for predicting student outcomes, such as academic success or likelihood of dropout, can also be high-risk. These predictions can influence decisions made by both institutions and students, so they must be based on reliable and unbiased data. 
  3. AI Assistants and Chatbots: AI systems that interact with students, such as AI assistants or chatbots, fall under this category. These systems can influence students’ learning experiences and outcomes, making transparency and accountability important. 
  4. Intelligent Tutoring Systems (ITS): ITSs are designed to provide personalized instruction to students, adapting to their individual needs and learning styles. Given their direct impact on student learning, these systems can be considered high-risk. 
  5. Monitoring Systems: AI systems intended to monitor and detect students who are cheating on tests could also constitute high-risk AI systems. These systems can have profound consequences for students, so they must be accurate and fair.
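One practical first step for an HEI is an AI system inventory mapped against categories like those above. The Python sketch below is illustrative only and not legal advice: the use-case strings and system names are the author's hypothetical simplification of the education examples just listed, not the Act's legal wording.

```python
# Author's simplified categories, loosely echoing the five examples
# above; a real inventory would cite the Act's Annex III text.
HIGH_RISK_EDUCATION_USE_CASES = {
    "assessment and evaluation of student performance",
    "predicting student outcomes",
    "ai assistant or chatbot for students",
    "personalized tutoring (ITS)",
    "monitoring/proctoring of tests",
}

def risk_label(use_case: str) -> str:
    """Return a rough triage label for an HEI AI system's use case."""
    if use_case in HIGH_RISK_EDUCATION_USE_CASES:
        return "high-risk: review obligations before deployment"
    return "not listed: assess individually"

# Hypothetical campus inventory.
inventory = [
    ("GradeAssist", "assessment and evaluation of student performance"),
    ("LibraryShelfBot", "locating physical books in the stacks"),
]
for name, use_case in inventory:
    print(f"{name}: {risk_label(use_case)}")
```

Even this crude triage forces the useful question of which campus systems touch student assessment, prediction, tutoring, or proctoring at all.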

Possible Strategies for U.S.-based HEIs to Comply with the European AI Act 

Strategy 1: Develop Comprehensive Ethical Guidelines 
Responsible parties: Ethics Committee (faculty, AI specialists, legal experts); Chief Ethics Officer 
– Establish a dedicated ethics committee 
– Create a detailed ethical framework 
– Align with European AI Act principles 

Strategy 2: Implement a Rigorous Bias Auditing Framework 
Responsible parties: Data Science Team (led by the Chief Data Officer); External Auditors 
– Conduct regular bias audits 
– Use European AI Act recommended tools 
– Publish audit results and corrective measures 

Strategy 3: Ensure Diverse and Representative Training Data 
Responsible parties: Data Scientists and Engineers (managed by the Chief Data Officer); Collaboration Teams 
– Source representative training data 
– Regularly review/update datasets 
– Collaborate with European institutions 

Strategy 4: Adopt a Balanced Approach to AI Integration 
Responsible parties: Academic Affairs Office (Provost or Chief Academic Officer); Faculty Development Teams 
– Integrate AI with traditional methods 
– Train educators in AI integration 
– Monitor impact on learning outcomes 

Strategy 5: Launch Comprehensive Digital Literacy Programs 
Responsible parties: IT Department (led by the Chief Information Officer); Academic Affairs and Faculty 
– Develop digital literacy curricula 
– Offer AI workshops/training sessions 
– Share best practices with European partners 

Strategy 6: Establish Robust Redundancy Plans 
Responsible parties: IT Department; Risk Management Office 
– Develop/document backup systems 
– Regularly test/update redundancy plans 
– Align measures with the European AI Act 

Strategy 7: Maintain Regular System Updates and Maintenance 
Responsible parties: IT Department & Management Teams 
– Schedule regular updates/maintenance 
– Track/manage updates 
– Collaborate with tech vendors for the latest advancements 

Figure 1 HEI Strategy Table by Jasmin (Bey) Cowin

What about the Money? 

The author believes that US HEIs must proactively prepare for the European Union AI Act, especially those with partnerships involving European institutions or those hosting exchange students from the EU. This preparation requires a significant allocation of resources to ensure compliance and maintain fruitful collaborations.

To get a better grasp of salary expenditures, the author compiled estimated salary ranges for the specified positions from general industry knowledge and salary databases such as the Bureau of Labor Statistics (BLS), Glassdoor, PayScale, Indeed, and Salary.com.

Position                              Estimated Salary (USD) 
Ethics Committee Member (lawyer)      $70,000 – $120,000 
Chief Ethics Officer                  $150,000 – $220,000 
Chief Data Officer                    $200,000 – $300,000 
Data Scientist                        $100,000 – $150,000 
External Auditor                      $60,000 – $90,000 
Data Engineer                         $90,000 – $140,000 
Provost                               $150,000 – $250,000 
Chief Academic Officer                $160,000 – $240,000 
Faculty Development Officer           $80,000 – $130,000 
Chief Information Officer             $170,000 – $260,000 
IT Department Staff                   $60,000 – $110,000 
Risk Management Officer               $90,000 – $140,000 
Figure 2 Salary Table by Dr. Jasmin (Bey) Cowin
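As a rough illustration of how these figures add up, the following Python sketch totals the low and high ends of the salary ranges in Figure 2 for a selected set of roles. The "lean team" composition is the author's hypothetical example, not a recommendation, and the figures are estimates, not quotes.

```python
# Salary ranges transcribed from Figure 2 (rough industry estimates).
SALARY_RANGES_USD = {
    "Ethics Committee Member (lawyer)": (70_000, 120_000),
    "Chief Ethics Officer": (150_000, 220_000),
    "Chief Data Officer": (200_000, 300_000),
    "Data Scientist": (100_000, 150_000),
    "External Auditor": (60_000, 90_000),
    "Data Engineer": (90_000, 140_000),
    "Provost": (150_000, 250_000),
    "Chief Academic Officer": (160_000, 240_000),
    "Faculty Development Officer": (80_000, 130_000),
    "Chief Information Officer": (170_000, 260_000),
    "IT Department Staff": (60_000, 110_000),
    "Risk Management Officer": (90_000, 140_000),
}

def total_range(roles):
    """Sum the low and high ends of the selected roles' salary ranges."""
    low = sum(SALARY_RANGES_USD[r][0] for r in roles)
    high = sum(SALARY_RANGES_USD[r][1] for r in roles)
    return low, high

# A hypothetical lean compliance team for a smaller HEI.
lean_team = ["Chief Data Officer", "Data Scientist", "External Auditor"]
low, high = total_range(lean_team)
print(f"Lean team annual staffing: ${low:,} – ${high:,}")
```

Even this minimal three-role team lands in the $360,000–$540,000 per year range before benefits, software, hardware, and hosting, which underlines the budget pressure discussed below.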

Determining a specific budget for compliance is complex and depends on a range of factors, such as the institution’s size, the extent of AI integration, and the current state of institutional AI governance and data management practices. There is also a plethora of additional expenses, such as software and hardware expenditures, housing and hosting of servers and systems, updates to AI systems, data management practices, redundancy plans, and energy consumption. It is evident that institutions need to allocate resources strategically. Perhaps HEIs should invest in expert consultations or create an in-house task force to fully understand the EU AI Act’s requirements.

Compliance will necessitate updates to AI systems, data management practices, and overhauling existing protocols. This involves costs related to software updates, training programs for staff and students, and developing new policies and procedures. Institutions need to be prepared for these financial outlays to ensure their AI practices are in line with the new EU regulations. Maintaining compliance will also require ongoing audits, regular system updates, and continuous training. This means institutions must plan for a dedicated budget for long-term maintenance and adaptation to evolving regulations, ensuring sustained compliance over time. 

To manage these costs effectively, institutions might consider a phased approach, focusing on one area at a time. For example, they could start with implementing a bias auditing framework before moving on to ensuring diverse training data. This approach allows for a more manageable financial outlay and helps in assessing the costs and benefits of each step. By breaking down the compliance process into smaller, more achievable tasks, institutions can better handle budget constraints while progressively moving toward full compliance. 

However, the author believes the financial burden of these preparations is substantial and could have significant repercussions for smaller, less financially robust US-based HEIs. In the worst case, the costs associated with complying with the EU AI Act could force tough decisions, such as scaling back or even discontinuing international US–EU university programs. HEIs may find it challenging to allocate the necessary resources without impacting other critical areas of their operations. Consequently, international academic exchange and collaboration programs, vital for academic and cultural exchange for both European and US academic institutions, may suffer, leading to a reduction in opportunities for students, faculty, and international research collaboration.

This potential decline in academic exchange underscores the critical importance of fostering mutual understanding and collaboration across borders. The benefits of such collaboration are well documented in the extensive research on social capital. Coleman (1988) and Putnam (2000) highlight that projects, networks, and partnerships, both within and between communities, are valuable because they breed social trust, foster norms of reciprocity, and create reserves of goodwill that can be invaluable during times of crisis. As Senator J. William Fulbright (1905–1995) said of the Fulbright program: “In the long course of history, having people who understand your thought is much greater security than another submarine” (quoted in the New York Times, June 26, 1986).

Further reading: 

Article 10: Data and Data Governance | EU Artificial Intelligence Act 

A guide to high-risk AI systems under the EU AI Act – Pinsent Masons 

BENEFITS, CHALLENGES, AND SAMPLE USE CASES OF ARTIFICIAL INTELLIGENCE … 

Coleman, James S. 1988. Social capital in the creation of human capital. American Journal of Sociology 94 (suppl.): S95-S120.

10 AI Risks in Higher Education and Mitigation Strategies 

European Education Area: Quality education and training for all 

European Parliament. (2024). Artificial Intelligence Act Overview. Retrieved from https://artificialintelligenceact.eu/wp-content/uploads/2024/01/AI-Act-Overview_24-01-2024.pdf  

Putnam, Robert D. 2000. Bowling alone: The collapse and revival of American community. New York: Simon & Schuster.

This article has been produced by Dr. Jasmin (Bey) Cowin, Associate Professor and U.S. Department of State English Language Specialist (2024). As a columnist for Stankevicius, she writes on Nicomachean Ethics – Insights at the Intersection of AI and Education. Get in touch via LinkedIn.
