The EU is putting the finishing touches on the so-called Artificial Intelligence Act (AIA) – a new law on artificial intelligence that will be relevant for all companies, private as well as public, that use or develop AI or are considering doing so. The plan is for the bill, which was introduced in 2021, to be adopted by the end of this year, with effect from 2025/26.
There is no doubt that AI holds enormous potential in terms of innovation, growth and societal benefits. It matters for our economy and the environment, and for our welfare and health, not to mention the resource and operational benefits across sectors and industries.
Artificial intelligence is already a part of our everyday lives when we use digital services, apps and social media, communicate with chatbots at the municipality or insurance company, drive modern cars or call 112 (Danish emergency telephone number). In light of the rapid technological development, the EU has found it necessary to establish a legal framework to promote development and address the risks associated with AI.
Instead of pausing artificial intelligence development as proposed by over 1,000 tech and AI experts in an open letter in March 2023, the EU wants to promote AI based on European values. The law is, therefore, about promoting safe, reliable and ethical AI while also prohibiting certain types of AI that pose risks and negative consequences for individuals and society.
As mentioned, the law is not yet final. We do not know the final wording, but given the rapid development and obvious benefits, staying updated on the coming regulation is becoming relevant to more and more people. And with the EU’s approach, the path is paved for developing AI systems that serve organisations, society and the individual alike.
This article attempts to provide an initial introduction to the AIA and is for anyone who is already using AI in their organisation or expects to use it in the near future. We will discuss why the EU has decided to create the AIA, what the AIA entails, and what organisations need to do in the aftermath of its adoption.
Europe should be a leader in the world
According to the EU’s latest draft law, an AI system is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments”.
After difficult negotiations, the EU has landed on a very broad definition encompassing all kinds of AI systems, including classical machine learning and generative AI. The key is to assess the risks posed by the system in order to determine the requirements that must be met before the product can be used.
As mentioned, the EU’s ambition is for the AIA to support Europe in becoming a leader in the development of safe, reliable and ethical artificial intelligence worldwide.
Although the focus is on technical implementation and documentation, the rules are based on a foundation of rights, ethics and democracy. The proposal aims to ensure that fundamental rights are protected while also supporting positive AI development. As stated directly in the text, artificial intelligence should be “a tool for people and be a force for good in society with the ultimate aim of increasing human well-being”.
Although the EU’s goal of demanding a human-centred approach throughout the AI system’s lifecycle is quite ambitious, it can help ensure public trust in the use of AI. And trust is crucial to promoting AI development in Europe, and it may even become a competitive advantage for Danish and European companies in markets outside the EU.
The four risk levels
The law is based on a risk-based approach, which means that the requirements for AI systems differ depending on the risk level of each system. In other words, the greater the potential harm the system could cause to people, the greater the requirements that must be met. We are familiar with this risk-based approach from the GDPR, which fully applies when processing personal data in connection with the use of AI.
The AIA distinguishes between four risk levels:
- Unacceptable risk
- High-risk AI
- Generative AI
- Limited risk
The first category completely prohibits certain AI systems, while a wide range of requirements are imposed on systems that fall into the high-risk category. High-risk AI is the category that most of the law is centred on. The following sections describe the types of AI systems that fall into each category, followed by an introduction to the requirements imposed after categorisation.
#1 Unacceptable risk
The first level relates to AI systems that are considered unacceptable and, therefore, prohibited from being developed and offered in Europe under the regulation. This includes AI systems that manipulate people’s subconscious through subliminal techniques, distort human behaviour with a risk of physical and mental harm or exploit people’s social or economic situation.
Exactly what this entails still depends on the individual risk assessment, but a reasonable presumption is that the prohibition will curb so-called rabbit holes, where algorithms make a particular behaviour on the internet self-reinforcing, such that children and young people with self-harming behaviour, in particular, automatically get more of that content. Similarly, apps that secretly profile and manipulate children so that they are kept hooked on certain games for hours will probably be prohibited.
If the latest draft is adopted, it may also be prohibited to exploit knowledge of indebted young people through targeted marketing of quick loans. In the latest draft, the prohibition has been expanded from the risk of physical and mental harm to the exploitation of people’s social or economic situation.
The use of AI systems for real-time biometric remote identification of individuals in public places is also prohibited, which is likely to ensure that we do not get enforcement methods in Europe like some of the examples seen in, for example, China.
#2 High-risk AI
Included in the high-risk category are AI systems related to critical infrastructure or systems affecting people’s fundamental rights. Although targeted marketing and recommendation systems affect us as humans, they are not considered high-risk systems as long as they are transparent and do not covertly exploit people’s vulnerabilities.
However, AI systems used to assess specific individuals in connection with study admissions, recruitment for vacant positions, criminal offences or asylum will be considered high-risk and must be addressed. This also means that municipalities’ and regions’ use of AI systems to assess access to social benefits or treatment will be subject to special requirements.
The high-risk category is divided into AI systems for toys, aviation, cars, medical devices and lifts (in line with EU product safety legislation) as well as eight specific areas that must be CE-marked and registered in an EU database:
- Biometric identification and categorisation of physical persons
- Handling and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in the legal interpretation and application of the law
The requirements for high-risk AI systems are extensive and can be characterised by two main pillars: a technical pillar covering risk and data management, documentation and registration, and a protection pillar requiring AI to comply with existing fundamental rights, ensure transparency and human supervision, and have a positive impact on the individual and society.
The technical pillar
Looking at the technical requirements for high-risk AI systems, this includes, among other things, the establishment and maintenance of a risk management system. This must involve mapping and analysing known and predictable risks as well as assessing the risks that may arise when the system is in use – both when used correctly and when misused in a way that can reasonably be predicted.
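To make the idea of such a risk management system concrete, here is a minimal sketch of a risk register in Python. The AIA does not prescribe any particular format or tooling; the `Risk` structure, the 1–5 scales and the scoring threshold below are illustrative assumptions, not requirements from the regulation:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in an illustrative AI risk register (format is an assumption)."""
    description: str
    likelihood: int        # assumed scale: 1 (rare) .. 5 (almost certain)
    severity: int          # assumed scale: 1 (negligible) .. 5 (critical)
    foreseeable_misuse: bool = False   # risk arising from reasonably predictable misuse
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood-times-severity score, a common convention
        return self.likelihood * self.severity

def top_risks(register, threshold=12):
    """Return risks whose score meets the threshold, highest first,
    so they can be prioritised for mitigation and documentation."""
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)
```

The point is not the code itself but the discipline it represents: mapping both intended-use risks and foreseeable-misuse risks in a structured, reviewable form that can be maintained while the system is in use.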
In addition, there are requirements for data and data management, including that training data is relevant, representative, error-free and complete. The latter requirements in particular can be difficult to specify in AI practice as we know it today; exactly what it means for data to be error-free and complete still needs to be clarified and tested. In any case, continuous documentation during the development phase of AI systems will only become more important.
That said, the EU appears to have developed the regulation with existing best practice in mind, so if you already run a mature MLOps setup, many of the technical requirements should be met or possible to implement as part of that setup.
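As an illustration of what such checks might look like inside an existing MLOps pipeline, the sketch below runs basic completeness and plausibility checks on training data and emits findings that can be kept as documentation. The field names, the `age` plausibility range and the report format are all hypothetical assumptions for the example; the AIA does not specify any concrete check:

```python
def check_training_data(rows, required_fields):
    """Run basic completeness and plausibility checks on training data
    (a list of dicts) and return a list of findings for documentation.
    The specific checks here are illustrative, not mandated by the AIA."""
    findings = []
    # Completeness: flag rows with missing required fields
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            findings.append(f"row {i}: missing fields {missing}")
    # Plausibility: flag 'age' values outside an assumed valid range
    ages = [r["age"] for r in rows if isinstance(r.get("age"), (int, float))]
    if ages and (min(ages) < 0 or max(ages) > 120):
        findings.append("age values outside plausible range 0-120")
    return findings
```

Running checks like these on every training run, and versioning the resulting reports alongside the model, is one pragmatic way to build the continuous documentation trail the regulation points towards.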