AI Act enters into force! Are you prepared?

On 2 August 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (AI Act) entered into force. The AI Act provides rules for the safe, responsible, and transparent use of artificial intelligence (AI) in the EU. Over the next two years, companies using AI will have to prepare for numerous new obligations.

The AI Act impacts a wide range of entities, from AI system manufacturers to end users. If your company develops AI systems or uses off-the-shelf AI solutions in its operations, you should pay close attention to the regulation. First, companies will need to ask whether their solutions meet the definition of an AI system under the AI Act. If so, the next step will be to assess the level of risk that the use of a particular AI system poses. The regulation classifies AI systems by risk level, and, generally, the higher the risk an AI system poses, the more obligations fall on the entities involved.

  1. The first category comprises AI systems whose use poses an unacceptable risk. Their use and supply in the EU will be prohibited altogether. This category includes, for example, AI systems for social scoring, systems designed to manipulate or biometrically categorise people based on age, gender, or race, and other similar systems contrary to core EU values.
  2. The main focus of the new regulation is the category of high-risk systems. This may include AI systems used in critical infrastructure (the transport industry, the energy sector), access to essential services (healthcare, banking, insurance, welfare), education, employment, or law enforcement, as well as other systems identified under the detailed classification criteria set out in the AI Act. The use of these AI systems could threaten fundamental rights, freedoms, and security, and this category is therefore subject to the strictest regulation and a wide range of obligations for the entities concerned.
  3. AI systems where the risk lies in a lack of transparency will be subject to numerous information obligations to ensure that anyone exposed to an AI interaction (e.g., a chatbot or deepfake content) is informed of this fact and can freely decide whether to continue or end the interaction.
  4. In addition to the above categories, the AI Act also regulates general-purpose AI models and general-purpose AI models with systemic risk. This special category was included in the AI Act in response to the advent of ChatGPT and similar large language models.
It is high time for companies that want to supply AI systems to the European market or deploy them in their operations to start preparing for the new regulation. Within six months, i.e., by 2 February 2025, they must identify and stop the use of prohibited AI systems. Failure to comply with this ban carries severe penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher. From 2 August 2025, obligations for general-purpose AI models will enter into effect. The widest range of obligations, those impacting high-risk systems, will enter into effect in stages, on 2 August 2026 and 2 August 2027.