Second wave of effects of the AI Act


Half a year after the ban on unacceptable artificial intelligence systems took effect, the European artificial intelligence regulation is entering its next phase. Since 2 August 2025, the second wave of rules under EU Regulation 2024/1689 (AI Act) has been in effect. This phase is primarily aimed at providers of general-purpose AI models, including the companies behind ChatGPT, Llama, Gemini and other well-known large language models.
General-purpose AI models - new rules
General-purpose AI models are highly versatile and adaptable to numerous application areas. They can be easily linked to a wide variety of applications, integrated into new models, and further adapted to specific needs. General-purpose models are usually trained on huge amounts of data, ranging from text and images to audio and video. The new rules aim to address the many risks associated with their development and use.
The new rules are primarily intended to ensure that general-purpose models are developed with an emphasis on transparency and accountability and that their use is safe and reliable. Providers are required to disclose information about model capabilities, computational resources, potential risks and the data used, including whether copyrighted works were used in training. At the same time, they must maintain detailed technical documentation and give downstream developers the documentation needed to safely use and integrate the models into their own solutions.
An even stricter regime applies to highly capable and powerful general-purpose models, referred to as general-purpose AI models with systemic risk because of their potential impact on the European market or on public health, safety, fundamental rights and society. These models must be notified to the European Commission and regularly tested, including through adversarial testing, and their providers must continuously monitor their behaviour, report serious incidents and ensure a high level of cybersecurity.
Code of practice and guidance for general-purpose AI models
In connection with the new rules, in July the European Commission presented the final version of a voluntary code of practice for general-purpose AI models, the result of almost a year of joint work involving EU and national authorities, academics, experts and representatives of major technology companies. The code of practice focuses on transparency, copyright compliance and security, and is intended to promote responsible technology development. Once approved by the member states and the Commission, the code of practice will allow signatories to demonstrate their compliance with the AI Act. It will also serve as a practical framework for downstream providers, i.e., providers of AI systems built on general-purpose models.
To date, the code of practice has been signed by leading technology companies including OpenAI, Google, Microsoft, and Anthropic. However, some companies are reluctant to sign: Meta has refused outright, while xAI has agreed only to the security chapter. Critics argue, among other things, that the code amounts to unnecessary regulation that may hamper the development of artificial intelligence in Europe. Broad support, however, is key to ensuring that the code is widely used in practice and that a unified approach to AI development emerges.
Further guidance on interpreting these obligations is provided by the European Commission's non-binding guidelines on the scope of obligations of providers of general-purpose AI models under the AI Act, issued as the rules took effect. The document specifies, among other things, when an AI model qualifies as a "general-purpose model", how to determine its provider, and the conditions under which a downstream provider may itself assume that role.
Starting the sanction mechanism and what to do next
The AI Act imposes obligations not only on AI system providers and users but also on the member states, which had until 2 August 2025 to empower the relevant national authorities to oversee compliance with the AI Act. In the Czech Republic, this agenda fell, among others, to the Ministry of Industry and Trade, which is responsible for preparing the Czech act on artificial intelligence. The Czech Telecommunications Office has been designated as the main supervisory authority.
If your organisation is developing or using AI-based solutions and has not yet started preparing for the AI Act, do not delay: despite (failed) efforts to postpone the AI Act's effectiveness, the time to prepare is quickly running out. Moreover, the AI Act's sanction mechanism is triggered alongside the general-purpose model rules. The most serious breaches carry fines of up to EUR 35 million or seven per cent of a company's total worldwide annual turnover, whichever is higher, while less serious breaches carry fines ranging from EUR 7.5 million to EUR 15 million or one to three per cent of worldwide turnover, depending on the breach.
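By way of illustration, the cap for the most serious breaches scales with company size. The sketch below assumes, per Article 99 of the AI Act, that the applicable ceiling is the higher of the fixed amount and the turnover percentage; the figures are those cited above, and the function is purely illustrative, not legal advice.

```python
def fine_cap(worldwide_turnover_eur: float,
             fixed_cap_eur: float = 35_000_000,
             turnover_share: float = 0.07) -> float:
    """Illustrative upper limit of a fine for the most serious breaches:
    the higher of the fixed cap and the turnover-based cap (assumption
    based on Article 99 of Regulation (EU) 2024/1689)."""
    return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70m) exceeds EUR 35m,
# so the turnover-based ceiling applies:
print(fine_cap(1_000_000_000))  # 70000000.0
```

For smaller companies whose seven-per-cent share falls below EUR 35 million, the fixed amount remains the ceiling.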
In practice, the process of developing and implementing the necessary measures can be time-consuming and usually requires the involvement of various experts across the organisation. We therefore recommend starting as soon as possible: map the AI tools in use, categorise them by risk, determine your organisation's role and identify the associated responsibilities. If you are unsure about the AI Act or simply want to confirm you are moving in the right direction, do not hesitate to contact us.