
Riskiest systems stopped: AI Act in effect

The EU Regulation laying down harmonised rules on artificial intelligence (the AI Act) has been in force since August 2024. The first wave of its effects has now arrived: from 2 February 2025, the regulation bans the use of certain AI systems that the legislators judge to pose an unacceptable risk to society.

We previously reported on the main points of the AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council) here.

The ban targets particularly problematic AI systems that may become tools of unethical practices involving the abuse of power and control, and that may lead to discriminatory and unjustified treatment. Such practices are contrary to key values, public interests, and fundamental rights recognised and protected by EU law, such as human dignity, freedom, equality, and the right to privacy. Accordingly, the EU legislators considered it necessary to prohibit these strictly defined and unacceptable AI practices to prevent their occurrence in the territory of the European Union.

Companies developing or using AI have had six months to map their AI systems and assess whether any of them fall within the category of prohibited practices. If so, they should by now have stopped using them.

The prohibition of AI systems with unacceptable risk is the first and crucial step towards the responsible and ethical use of AI. It sets clear boundaries for the use of the technology and thus contributes to creating a safer and more transparent environment for its users and others.

AI systems whose placing on the market, putting into service, and use are prohibited include:

  • AI systems that influence users beyond their conscious awareness through subliminal, manipulative, or deceptive techniques, as well as AI systems that exploit the vulnerabilities of persons (e.g. children or the elderly, who are more susceptible to deception or manipulation), with the objective or the effect of influencing their behaviour and interfering with their freedom of choice. An important prerequisite is the occurrence, or at least the reasonable likelihood, of significant harm.
  • AI systems for social scoring that evaluate or classify persons or groups of persons based on their social behaviour or personal or personality characteristics, where the resulting social score leads to detrimental or unfavourable treatment in unrelated social contexts, or to treatment that is disproportionate to the social behaviour or its gravity. The main concern with these systems is that the outcome may be discriminatory and lead to the exclusion of certain groups.
  • AI systems for predicting the risk that a person will commit a criminal offence based solely on the profiling of a natural person or on the assessment of their personality traits and characteristics (e.g. nationality, place of residence, or financial standing). Such an approach is contrary to the presumption of innocence, as criminal prediction should be based on a person’s actual behaviour.
  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This method of data collection reinforces the feeling of mass surveillance and infringes on an individual’s fundamental right to privacy.
  • AI systems that infer the emotions of persons in the workplace or educational institutions based on their biometric data. These systems are problematic because of their limited reliability and accuracy, as individuals’ expression of emotions varies widely from one situation to another, and they can lead to disadvantageous treatment. The only exceptions are AI systems used for medical or safety reasons.
  • AI systems for biometric categorisation that classify individuals based on their biometric data in order to infer their race, political opinions, religious beliefs, or similar attributes. The only exceptions to this prohibition are the labelling or filtering of lawfully acquired biometric datasets and the categorisation of biometric data in the context of law enforcement to protect public order and safety.
  • With certain exceptions, law enforcement agencies may not use AI systems for real-time remote biometric identification in publicly accessible spaces for the purposes of preventing or prosecuting crime or executing a criminal penalty. Their use is allowed only under specified conditions and in exceptional cases, such as the search for victims of abduction or missing persons, the localisation of persons suspected of having committed serious crimes, or the prevention of substantial and imminent threats to life or physical safety.

If you are using AI systems and have concluded that the AI Act applies to you, it is high time to start preparing. At a minimum, take stock of the AI systems in your company: whether you are a mere user or also develop and market them, what function each identified system serves, and which risk category under the AI Act it falls into. If you find that any of your AI systems falls within the prohibited category, modify its operation as soon as possible or stop using or marketing it in the EU altogether. Failure to do so exposes you to heavy fines of up to EUR 35 million or 7% of your worldwide annual turnover, whichever is higher.