What does EU Artificial Intelligence regulation mean for AI in education?
A look at how the EU AI Act potentially affects AI's use in educational settings
It’s hard to miss the transformative impact that growing capabilities in AI are likely to have on education. Generative AI, for instance, is beginning to open up unprecedented opportunities around personalized learning, teaching augmentation, and massively scalable learning platforms.
Yet the future of AI in education is deeply dependent on how emerging AI regulations open up or close down opportunities. This is especially true when it comes to the European Artificial Intelligence Act, which is set to become the first major set of AI regulations out of the blocks.
Navigating AI’s promise and pitfalls in education
Given the flurry of concerns around ChatGPT and “cheating in class” earlier this year, it may be tempting to think that AI regulations will be a boon for education. But as ever, things are far from black and white here.
Of course, when they first came out, AI platforms like ChatGPT proved to be a serious headache for instructors who relied on copious written assignments. But many are seeing this as a temporary glitch that distracts from the true potential of AI to positively change the ways we teach and the ways that students learn.
I still come across many people — some of them leading technologists and thought leaders, and all of them well-meaning — who assume that teaching is all about rote learning and an obsession with cheating. Yet most educators I know and work with are light years away from these old-fashioned ideas. Education these days is about using creative, evidence-based approaches that enable learners to develop the skills and knowledge they need to succeed. As a result, there’s a growing emphasis on learning as a journey rather than on outmoded ideas of what “an education” is.
The upshot of this is that many educators are less concerned about how AI undermines old ways of teaching, and more interested in how emerging capabilities can better serve learners of all ages, abilities, and backgrounds.
Here, the potential is profound. I’m already seeing hints of this in the course I’m currently teaching, where students use ChatGPT to learn how to better use ChatGPT. If we get AI right, we have the potential to scale education far beyond the current limitations of in-person and online teaching; to make learning more accessible and more effective than ever before; to provide students with personal AI-based learning pathways that are uniquely tailored to their abilities and potential; and to provide educators with tools and resources that vastly amplify their reach and impact while supporting their health and wellbeing.
But all of this will depend on policies and regulations being put in place that support the creative, innovative, and impactful use of AI in education, rather than stifling it.
This makes it all the more important that educators understand the implications of emerging AI regulations, and their (or their institution’s) roles in helping ensure that pathways remain open for the responsible, beneficial, and transformative use of AI in education.
The EU AI Act
The first major set of AI regulations to be put in place is likely to be the European Union Artificial Intelligence Act. While the Act was first proposed in 2021, a series of amendments adopted by Members of the European Parliament on June 14 brings it closer to being the first significant AI regulation to be implemented.
What makes the EU AI Act especially important is the global reach that EU regulations typically have. Even though the Act will only be legally binding in member states, it has the potential to create barriers and opportunities for organizations and companies outside the European Union that trade or otherwise engage with the EU — especially if they are out of compliance. This is something we’ve seen with the EU General Data Protection Regulation (GDPR) and other European regulations. The draft regulation already contains language that allows for this, as it specifies that the regulation applies to “providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country”.
Even where there aren’t direct barriers to engagement, there’s a strong likelihood that other countries — including the US — will take a lead from Europe, simply because it’s often easier to align and harmonize than not to, unless there are strong economic reasons to do otherwise.
The EU AI Act and education
So how might the EU AI Act impact AI in education?
The good news is that education is identified as an area of positive application in the original draft of the regulation — although in a rather lukewarm way (keep an eye out for the education reference below, as it’s easy to miss):
Recital (3): “Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.”
The bad news is that education is also flagged up in the “high risk” category of AI applications. This, though, requires some unpacking to appreciate what it means in practice.
The EU AI Act takes a risk-based approach to regulating AI systems and applications, with systems typically being classified as posing unacceptable risk, high risk, limited risk, or minimal risk.
The unacceptable risk category includes applications that result in or are intended for cognitive behavioral manipulation, social “scoring” — much like China’s social credit system — and real-time and remote biometric identification systems such as facial recognition. In principle this doesn’t apply directly to education … until novel education-based systems threaten to cross the line of “cognitive behavioral manipulation.”
I suspect that this isn’t a concern in the near term, but I have to wonder how many innovators are thinking about how to use AI to guide and direct individual students along specific learning paths, and the degree to which they are considering using AI’s persuasive and manipulative power to achieve this. It’s certainly an early warning to anyone hoping to emulate Neal Stephenson’s “illustrated primer” from the book The Diamond Age.
These concerns aside, it’s the high risk category that is potentially of most interest to educators. This is the category in the Act that comes with the most stringent regulatory requirements, and includes the use of AI in law enforcement, democratic processes, critical infrastructure, border control management, biometric identification, and employment-based decisions.
It also includes education and vocational training.
To be clear, the high risk aspects of education and vocational training that are highlighted include AI systems intended to be used for the purpose of determining access to educational and vocational training institutions, and AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions or assessing requirements for admission.
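For teams building or procuring educational AI, it can help to make these categories concrete. What follows is a minimal sketch, in Python, of how an institution might run a first-pass triage of its AI systems against the Act’s tiers. The tier names come from the Act itself, but the example purposes, keywords, and mapping logic are my own illustrative assumptions (and emphatically not legal advice), drawing on both the original draft and the June amendments discussed below.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the draft EU AI Act (tier names from the Act)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, but subject to stringent obligations"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no specific obligations"

# Education-related purposes flagged as high risk in the draft Act and its
# June amendments: admissions and access decisions, student assessment,
# placement decisions, and exam proctoring. The keyword names are mine.
HIGH_RISK_EDUCATION_PURPOSES = {
    "admissions",   # determining access to institutions
    "assessment",   # evaluating students on tests
    "placement",    # influencing the level of education a person receives
    "proctoring",   # detecting prohibited behaviour during tests
}

def triage(purpose: str) -> RiskTier:
    """Crude first-pass classification of an education AI system by purpose."""
    if purpose == "emotion_inference":
        # Added to the prohibited practices by amendment 226 (see below).
        return RiskTier.UNACCEPTABLE
    if purpose in HIGH_RISK_EDUCATION_PURPOSES:
        return RiskTier.HIGH
    # Everything else defaults to minimal here; in reality each system
    # would need case-by-case review.
    return RiskTier.MINIMAL

for purpose in ["admissions", "proctoring", "emotion_inference", "ai_tutoring"]:
    print(f"{purpose}: {triage(purpose).name}")
```

Notably, a hypothetical “ai_tutoring” purpose falls through to minimal risk in this sketch, which is exactly the kind of ambiguity around AI in learning (as opposed to AI in educational decision-making) that the rest of this piece explores.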
However, the inclusion of education here should give educational institutions pause for thought.
Thankfully, the language in the Act focuses specifically on the use of AI to make decisions in education that impact students’ participation in programs. It does not directly address the use of AI in learning. However, there is potential ambiguity here, which is perhaps why education features more heavily in a series of amendments to the Act that were published in June of this year.
The June 14 amendments to the EU AI Act
On June 14, 771 amendments to the original draft Act were proposed and adopted. Many of these address issues around foundation models and generative AI that have emerged since the original Act was drafted. They also address AI in education more specifically.
There are six amendments in total that substantively relate to AI in education and learning, although there are others that have the potential to indirectly impact education as well. It’s worth going through all six of these one by one:
Amendment 65
Amendment 65 directly addresses the identification of education and vocational training as high-risk application areas as stated in recital 35 of the original draft.
The original text stated:
“AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.”
This is amended to:
“Deployment of AI systems in education is important in order to help modernise entire education systems, to increase educational quality, both offline and online and to accelerate digital education, thus also making it available to a broader audience. AI systems used in education or vocational training, notably for determining access or materially influence decisions on admission or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education or to assess the appropriate level of education for an individual and materially influence the level of education and training that individuals will receive or be able to access or to monitor and detect prohibited behaviour of students during tests should be classified as high-risk AI systems, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems can be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation.”
This amendment makes a clear statement about the potential benefits of AI in education — so much so that it’s worth reiterating it as it underlines the role of AI to “help modernise entire education systems, to increase educational quality, both offline and online and to accelerate digital education, thus also making it available to a broader audience.”
In other words, while the Act protects against discriminatory uses of AI in education, it supports systems designed to enhance education. The question here, though, is how easy it will be to distinguish between AI systems that enhance education and those that discriminate against some learners — especially as, in some cases, the two are likely to appear to go hand in hand.
Amendment 214
Amendment 214 is one of two amendments that replace Article 4 in the draft regulation: amendment 213 addresses general principles applicable to all AI systems, while amendment 214 addresses AI literacy.
Amendment 213 is one of the more controversial provisions, as it directly addresses the development and use of the foundation models underlying applications like ChatGPT and other generative AI systems. This will potentially impact more advanced applications of AI in education — especially as education-specific foundation models are developed. However, it’s amendment 214 that is more directly applicable to near-term educational opportunities and challenges.
Amendment 214 states:
“1. When implementing this Regulation, the Union and the Member States shall promote measures for the development of a sufficient level of AI literacy, across sectors and taking into account the different needs of groups of providers, deployers and affected persons concerned, including through education and training, skilling and reskilling programmes and while ensuring proper gender and age balance, in view of allowing a democratic control of AI systems
“2. Providers and deployers of AI systems shall take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used.
“3. Such literacy measures shall consist, in particular, of the teaching of basic notions and skills about AI systems and their functioning, including the different types of products and uses, their risks and benefits.
“4. A sufficient level of AI literacy is one that contributes, as necessary, to the ability of providers and deployers to ensure compliance and enforcement of this Regulation.”
Ignoring point 4 for a moment, this amendment sets a clear mandate for incorporating AI literacy into educational systems, programs, and opportunities. It’s a green light for educational organizations to develop AI-specific programs for their students, as well as to ensure that all students receive basic training in AI literacy.
From the perspective of a higher education institution (as this is where I teach), this amendment underlines the urgency with which we need to incorporate AI literacy and skills into all areas of the curriculum — not just for engineers and computer scientists, but in the arts, humanities, social sciences, and beyond. It also emphasizes the importance of ensuring our students understand how to navigate the risks as well as the benefits of AI systems.
Building on the Act, I’d go so far as to say that universities should consider AI literacy as a core set of competencies that come under the “general education” or “general studies” requirements that many institutions have.
Of course, things unravel a little here with the fourth point, which defines a sufficient level of AI literacy as one that contributes to the “ability of providers and deployers to ensure compliance and enforcement of this Regulation.” Hopefully we can work on this — in fact we have to, as AI literacy in the context of co-creating a more vibrant AI-enabled future goes far beyond understanding how to comply with a specific set of regulations.
Amendment 226
Article 5 in the draft Act lists AI practices that are to be prohibited. The amendments make a number of changes to these, including amendment 226, which adds:
“the placing on the market, putting into service or use of AI systems to infer emotions of a natural person in the areas of law enforcement, border management, in workplace and education institutions.”
This may not seem such a big deal for education, until questions arise around the use of AI to better understand the emotional state of students in order to better address their learning and wellbeing.
It’s going to be interesting to see how this plays out.
Amendments 715, 717, and 718
The original draft of the EU AI Act lists specific high risk applications of AI in Annex III. The June 14 amendments address these in some depth.
Here, amendments 715, 717, and 718 specifically address education. Of these, 715 is a modification of an existing clause, while 717 and 718 are new. All address AI systems considered to be high risk.
Amendment 715 addresses:
“AI systems intended to be used for the purpose of determining access or materially influence decisions on admission or assigning natural persons to educational and vocational training institutions”
Amendment 717 addresses:
“AI systems intended to be used for the purpose of assessing the appropriate level of education for an individual and materially influencing the level of education and vocational training that individual will receive or will be able to access”
And amendment 718 addresses:
“AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of/within education and vocational training institutions”
All of these are potentially problematic when it comes to using AI to tailor educational opportunities to individuals while maximizing learning.
Amendment 715 is laudably aimed at addressing educational injustice where AI could lead to discriminatory admission and access processes. However, I have to wonder whether it will also prohibit the use of AI to better match students with programs that serve their needs and aspirations.
Amendment 717 is again aimed at reducing or removing discriminatory educational practices. However, it could also be used to block AI-mediated personalized education and learning plans, where opportunities and pathways are individually crafted based on where a student currently is and on their personal educational needs.
Whether this is an issue for personalized AI learning programs or AI tutors remains to be seen, but things could get tricky here.
Amendment 718 is perhaps less contentious, given concerns around the trustworthiness of AI systems used to detect inappropriate behavior by students — especially when it comes to writing. However, students using generative AI to inappropriately complete assignments remains a major concern within many educational establishments, and there are already substantial investments being made in using AI to detect and address prohibited behavior. For example, Turnitin’s AI writing detection system is at the forefront of detecting when students submit AI-generated text as their own work.
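As an aside, it’s worth understanding the basic statistical idea behind many AI-writing detectors, if only to appreciate their limits. The toy sketch below (emphatically not Turnitin’s actual method, which is proprietary) scores text by its perplexity under an open language model, on the rough heuristic that machine-generated prose tends to be statistically more predictable. The cutoff value is a made-up illustration; in practice no single threshold is reliable, and false accusations are a real risk.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# A small open model stands in for the reference LM a real detector might use.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower suggests more 'model-like' prose."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Hypothetical cutoff for illustration only. Real detectors combine many
# signals and still produce false positives and false negatives.
THRESHOLD = 30.0

def flag_as_possibly_ai_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD
```

If amendment 718 classifies such detection and proctoring systems as high risk, tools built on heuristics like this would face stringent requirements around accuracy, transparency, and human oversight before they could be deployed in European classrooms.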
The degree to which this amendment may either limit the ability of educators to govern the use of AI in class, or for companies that develop AI detection tools to operate within Europe, remains to be seen — and it may be that this provision forces more creative approaches to integrating AI into teaching. But my guess is that it will cause some heartburn in the short term.
The bottom line
The bottom line here is that AI in education will most likely be impacted by the EU AI Act in a number of ways, and educational establishments the world over need to be keeping a close eye on this if they’re to ensure they don’t fall foul of it.
That said, the degree to which the Act will influence the AI policy landscape outside the EU is not yet clear. However, it does open up a window of opportunity for educators and educational establishments to work closely with policy makers to ensure that appropriate guardrails are put in place that support learning without stifling innovation.
Because at the end of the day, AI done right could be a game changer for how we all learn.