The EU Artificial Intelligence Act

Published 12 December 2023

The EU is putting the finishing touches on the so-called Artificial Intelligence Act (AIA), a new law on artificial intelligence that will be relevant for all companies, private as well as public, that use or develop AI or are considering doing so. The plan is for the bill, which was introduced in 2021, to be adopted by the end of this year and to take effect from 2025/26.


Introduction 


There is no doubt that AI holds enormous potential in terms of innovation, growth and societal benefits: for our economy and the environment, for our welfare and health, and in the form of resource and operational gains across sectors and industries.


Artificial intelligence is already a part of our everyday lives when we use digital services, apps and social media, communicate with chatbots at the municipality or insurance company, drive modern cars or call 112 (Danish emergency telephone number). In light of the rapid technological development, the EU has found it necessary to establish a legal framework to promote development and address the risks associated with AI.


Instead of pausing the development of artificial intelligence, as proposed by over 1,000 tech and AI experts in an open letter in March 2023, the EU wants to promote AI based on European values. The law is, therefore, about promoting safe, reliable and ethical AI while prohibiting certain types of AI that pose risks to, and have negative consequences for, individuals and society.


As mentioned, the law is not yet final. We do not know the final wording, but with the rapid development and the obvious benefits of AI, the need to stay updated on future regulation becomes relevant for more and more people. And with the EU’s approach, the path is paved for developing AI systems that serve organisations, society and the individual alike.


This article provides an initial introduction to the AIA and is aimed at anyone who is already using AI in their organisation or expects to do so in the near future. We will discuss why the EU has decided to create the AIA, what the AIA entails, and what organisations need to do once it is adopted.


Europe should be a world leader

According to the EU’s latest draft law, an AI system is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments”.


After difficult negotiations, the EU has landed on a very broad definition encompassing all kinds of AI systems, including classical machine learning and generative AI. The key is to assess the risks posed by the system in order to determine the requirements that must be met before the product can be used.


As mentioned, the EU’s ambition is for the AIA to support Europe in becoming a leader in the development of safe, reliable and ethical artificial intelligence worldwide.


Although the focus is on technical implementation and documentation, the rules are based on a foundation of rights, ethics and democracy. The proposal aims to ensure that fundamental rights are protected while also supporting positive AI development. As stated directly in the text, artificial intelligence should be “a tool for people and be a force for good in society with the ultimate aim of increasing human well-being”.


Although the EU’s goal of requiring a human-centred approach throughout the AI system’s lifecycle is quite ambitious, it can help ensure public trust in the use of AI. And trust is crucial for promoting AI development in Europe and could become a competitive parameter for Danish and European companies in markets outside the EU.


The four risk levels


The law is based on a risk-based approach, which means that the requirements for AI systems differ depending on the risk level of each system. In other words, the greater the potential harm the system could cause to people, the greater the requirements that must be met. We are familiar with this risk-based approach from the GDPR, which continues to apply in full when personal data is processed in connection with the use of AI.


The AIA distinguishes between four risk levels:

  1. Unacceptable risk
  2. High-risk AI
  3. Generative AI
  4. Limited risk

The first category completely prohibits certain AI systems, while a wide range of requirements are imposed on systems that fall into the high-risk category. High-risk AI is the category that most of the law is centred on. The following sections describe the types of AI systems that fall into each category, followed by an introduction to the requirements imposed after categorisation.


#1 Unacceptable risk


The first level relates to AI systems that are considered unacceptable and, therefore, prohibited from being developed and offered in Europe under the regulation. This includes AI systems that manipulate people’s subconscious through subliminal techniques, distort human behaviour with a risk of physical and mental harm, or exploit people’s social or economic situation.


Exactly what this entails will still depend on the individual risk assessment, but a reasonable presumption is that the prohibition will curb so-called rabbit holes, where algorithms make a particular behaviour on the internet self-reinforcing so that children and young people with self-harming behaviour, in particular, are automatically served more of that content. Similarly, apps that covertly profile and manipulate children to keep them hooked on certain games for hours will probably be prohibited.


If the latest draft is adopted, it may also be prohibited to exploit knowledge of young people’s indebtedness through targeted marketing of quick loans. In the latest draft, the prohibition has been expanded from covering the risk of physical and mental harm to also covering the exploitation of people’s social or economic situation.


The use of AI systems for real-time remote biometric identification of individuals in public places is also prohibited, which is likely to ensure that Europe does not see enforcement methods like some of those observed in, for instance, China.


#2 High-risk AI


Included in the high-risk category are AI systems related to critical infrastructure or systems affecting people’s fundamental rights. Although targeted marketing and recommendation systems affect us as humans, they are not considered high-risk systems as long as they are transparent and do not covertly exploit people’s vulnerabilities.


However, AI used to single out specific individuals in connection with study admissions, recruitment for vacant positions, criminal offences or asylum applications will be considered high-risk and must meet the corresponding requirements. This also means that municipalities’ and regions’ use of AI systems to assess access to social benefits or treatment will be subject to special requirements.


The high-risk category covers AI systems in products such as toys, aviation, cars, medical devices and lifts (in line with EU product safety legislation), as well as AI systems in eight specific areas that must be CE-marked and registered in an EU database:

  • Biometric identification and categorisation of physical persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in the legal interpretation and application of the law

The requirements for high-risk AI systems are extensive and can be characterised by two main pillars: a technical pillar covering risk and data management, documentation and registration, and a protection pillar requiring AI systems to respect existing fundamental rights, ensure transparency and human oversight, and have a positive impact on the individual and society.


The technical pillar


Looking at the technical requirements for high-risk AI systems, they include, among other things, establishing and maintaining a risk management system. This must involve mapping and analysing known and foreseeable risks as well as assessing the risks that may arise when the system is in use, both when it is used correctly and when it is misused in a way that can reasonably be foreseen.


In addition, there are requirements for data and data management, including that training data must be relevant, representative, error-free and complete. The latter in particular can be difficult to guarantee in AI practice as we know it today, and exactly what it means for data to be error-free and complete still needs to be clarified and tested. In any case, continuous documentation during the development phase of AI systems only becomes more important.
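As an illustration only, some of these data checks can be automated and their results stored as part of the development documentation. The sketch below is a minimal, hypothetical example in Python using pandas; the column names, dataset and checks are assumptions chosen to show the principle, not requirements taken from the regulation.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, required_columns: list[str]) -> dict:
    """Run simple, repeatable data checks and return a report that can be
    stored as part of the development documentation (hypothetical checks)."""
    return {
        # Completeness: are all expected columns present, and how much is missing?
        "missing_columns": [c for c in required_columns if c not in df.columns],
        "missing_values_per_column": df.isna().sum().to_dict(),
        # A simple proxy for "error-free": duplicated rows
        "duplicate_rows": int(df.duplicated().sum()),
        # A rough indication of representativeness: the target class balance
        "target_distribution": (
            df["target"].value_counts(normalize=True).to_dict()
            if "target" in df.columns else None
        ),
        "row_count": len(df),
    }

# Hypothetical usage with a tiny example dataset
df = pd.DataFrame({"age": [34, 51, None], "income": [420_000, 615_000, 380_000], "target": [0, 1, 0]})
print(data_quality_report(df, required_columns=["age", "income", "target"]))
```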


That said, the EU has drawn on existing best practice in drafting the regulation, so if you already run a solid MLOps setup, many of the technical requirements should either be met or be possible to implement as part of that setup.

Technical standards on the way


An important element of the AIA is that the rules are elaborated and detailed in technical standards, which are currently being developed.


The protection pillar


As regards the rules that concern fundamental rights and protection, emphasis is placed on transparency and on communicating information to the user. This is to ensure that users of the AI system can interpret its output and use it correctly, including through the development of adequate instructions for use for the end user.


In addition to transparency, requirements for human oversight are emphasised: natural persons must be able to effectively oversee the AI system during the period when it is in use. The system should be designed in a way that makes it possible to intervene and interrupt its operation, if necessary. One way to approach this area could be to develop data ethics principles in the organisation and ensure training in the use of artificial intelligence across management and employees.
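One possible way to translate this into system design, shown purely as a hypothetical sketch, is to route low-confidence outputs to a human reviewer and to keep an explicit switch that lets an operator interrupt automated decisions entirely. The threshold and names below are assumptions, not requirements from the regulation.

```python
from dataclasses import dataclass

# Flag an operator can flip to interrupt automated decisions entirely (a "stop switch").
SYSTEM_ENABLED = True

# Outputs below this confidence are escalated to a human (hypothetical threshold).
REVIEW_THRESHOLD = 0.90

@dataclass
class Decision:
    outcome: str             # e.g. "approve" / "reject"
    confidence: float        # model confidence in [0, 1]
    needs_human_review: bool

def decide(model_outcome: str, confidence: float) -> Decision:
    """Apply a simple human-in-the-loop gate on top of a model output."""
    if not SYSTEM_ENABLED:
        # The system has been interrupted; every case goes to a human.
        return Decision(model_outcome, confidence, needs_human_review=True)
    # Low-confidence outputs are flagged for human oversight.
    return Decision(model_outcome, confidence, needs_human_review=confidence < REVIEW_THRESHOLD)

print(decide("approve", 0.97))  # handled automatically
print(decide("reject", 0.62))   # escalated to human review
```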


#3 Generative AI


Generative AI refers to artificial intelligence that is capable of generating new content, such as texts, images and videos, based on existing data. Many will be familiar with this from ChatGPT, which has quickly become widespread among millions of users.


As the surge in the availability of AI over the last ten months has mainly been driven by generative AI, the EU Commission has dedicated a separate section to this type of AI. While high-risk AI systems are subject to significant requirements, not nearly as much is required of generative systems. Here, the focus is primarily on preventing the generation of illegal content and on informing users that the content they interact with has been generated using AI.


Additionally, any use of copyright-protected data must be disclosed, which speaks directly to many of the discussions currently underway regarding the training material used in large language models.


#4 Limited risk


AI systems with limited risk must meet transparency requirements that enable users to make an informed choice. This means that users must be made aware when they are interacting with AI. In principle, this includes generative AI systems that generate or manipulate image, sound or video content (such as deepfakes).


Limited-risk AI can include personal assistants and advertising platforms that use algorithms to target ads. Presumably, many AI solutions that we know from computer games, mobile phones and social media will be limited risk (provided that no manipulative techniques are used). And there will undoubtedly be even more existing solutions, such as spam filters, route planning or spellcheck systems, that pose so little risk that they fall entirely outside the scope of the AIA.


For AI that falls outside the high-risk category, member states are encouraged to develop codes of conduct that companies can follow instead. In Denmark, the so-called D-label (D-mærket), a certification scheme for IT security and responsible data use, is relevant here. The D-label was founded by Dansk Industri, Dansk Erhverv, SMVdanmark and Forbrugerrådet Tænk and is financed by Industriens Fond. The target group is both companies that work with AI or want to do so and companies that generally want to improve IT security in their organisation.

Who is this applicable to?


The rules in the AIA are relevant throughout the entire AI value chain, i.e. both when AI systems are developed, sold or shared and when other parties’ AI systems are used and applied in the European market. According to the proposal, this means that a range of parties, including distributors, importers, developers, providers, users and other third parties, may be subject to the rules and must consider how they approach the AI agenda. At present, it is still unclear how responsibility will be distributed among the involved parties.


Next step 


Since the AIA is a regulation, the rules will apply directly and thus, as a starting point, uniformly across the member states in Europe. Companies that already use AI or plan to do so will have up to two years to comply with the rules before the authorities begin to enforce them. In this context, it is essential to be aware that risk assessment, transparency, documentation and labelling must be in place before a high-risk product is launched on the market or put into internal use. In this way, the AIA works differently from the GDPR.


An important first step is, therefore, to map out one’s current AI applications and assess which risk level they fall under. Only then is it possible to assess what measures need to be taken. In addition, mechanisms must be set up in one’s innovation processes to ensure that the development of new AI solutions is always risk-assessed and meets the requirements of the regulation.


If no AI solutions are currently in use, a natural step could instead be to start the values-based discussion within the organisation so that considerations of rights and data ethics are in place from the start once AI is adopted.


The increased attention on AI over the last ten months has made many realise that AI can create both benefits and challenges. With the EU’s artificial intelligence law, we get, for the first time, a proposal for how to ensure that innovation and development within AI go hand in hand with rights and data ethics. What awaits now is the final adoption of the proposal, followed by the work of making the rules work in practice and bringing the systems to life in a safe, ethical and reliable way.

The AI lifecycle


The creation of AI systems follows a series of steps that make up the system’s lifecycle. This includes:

  1. Data preparation: In order to train the model, it is necessary to have high-quality data. The first step is, therefore, to collect, clean and prepare the data for training AI models.
  2. Training, testing and validation: The model is then trained by providing it with large amounts of data and instructions. The model is adjusted and fine-tuned until it can provide accurate results. Once the model is trained, it is tested to ensure that its results are accurate and reliable. This step typically involves splitting the data into training, testing and validation datasets and comparing the results (see the sketch after this list).
  3. Production: After testing and validation, the model is put into production, where it is used to solve real-world problems. This typically involves integrating with other systems and monitoring the model’s performance over time.
  4. Maintenance and optimisation: To ensure that the model continues to deliver reliable results, it requires maintenance and optimisation. This typically involves monitoring the model’s results and adjusting the parameters, if necessary.
  5. Decommissioning: Finally, it may be necessary to decommission the model if it is no longer needed or if its results are insufficient. This typically involves shutting down the model and removing its data and other resources.
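
To make step 2 a little more concrete, the following is a minimal sketch of how data is typically split into training, validation and test sets and how the resulting metrics are compared. The dataset, model and split ratios are hypothetical and chosen only to illustrate the principle.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset standing in for real, prepared data (step 1).
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Step 2: split into training (60%), validation (20%) and test (20%) sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42)

# Train on the training set, tune against the validation set, report the final test score.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```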


MLOps


MLOps stands for machine learning operations and refers to the processes and technologies used to automate and manage the lifecycle of AI models. This typically includes automating the training, testing and deployment of models, managing data and model versioning, and monitoring and optimising models in production. MLOps is an important part of the AI system lifecycle as it enables efficient management and maintenance of models over time.
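
As a small illustration of what part of such a setup might look like, the sketch below uses MLflow, a widely used open-source MLOps tool, to log the parameters, metrics and trained model of a single run so that it is versioned and traceable afterwards. The experiment name, model and parameters are hypothetical, and comparable functionality exists in other MLOps platforms.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

with mlflow.start_run():
    model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics and the model artefact so the run is versioned
    # and can be traced later as part of the documentation trail.
    mlflow.log_param("max_iter", 1_000)
    mlflow.log_metric("test_accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```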
