AI under control: European Parliament moves landmark regulation to trialogue
In mid-June, the European Parliament (EP) adopted its negotiating position on the proposed regulation on artificial intelligence (AI), kick-starting trialogue negotiations. This brings the EU one step closer to adopting the world's first comprehensive AI regulation.
The regulation aims above all to ensure the safety and increase the transparency of this rapidly developing technology. Companies wishing to supply AI systems to the EU market, or to use them in their own operations, will face a number of new obligations, depending on the category of the AI system.
The regulation divides AI into four categories according to the level of risk it poses (risk-based approach): unacceptable, high, limited, and minimal. The use of AI falling into the ‘unacceptable’ category (e.g., social scoring) will be prohibited in the EU. Most obligations will affect high-risk AI systems. This category includes many AI systems used in critical infrastructure, education, and recruitment.
The EP will now negotiate its amendments with the Commission and the Council in the trialogue, discussing, among other things, the final definition of AI itself and the criteria for classifying systems as high-risk. In response to one of the hottest topics of recent months, ChatGPT and similar tools, the EP also proposes that certain obligations apply specifically to generative AI systems. The level of sanctions will also be on the table: the EP proposes raising the maximum fine for breaching the rules set out in the regulation to EUR 40 million or, in the case of a company, up to seven percent of its total worldwide annual turnover.
The trialogue negotiations could conclude this year, in which case the final text of the AI regulation could be published in early 2024.