The ‘AI Act’: A Milestone in Harmonizing Artificial Intelligence Regulations

The European Parliament and Council reached a significant breakthrough by agreeing on the ‘AI Act’ on 8 December 2023, legislation that establishes unified rules for the development and deployment of AI technologies in the European Union (EU). The act prioritizes ethics, transparency, and accountability to promote innovation while safeguarding the rights and well-being of EU citizens.

The AI Act addresses the need for comprehensive regulation in response to the rapid advancement of AI technology and its potential risks. It introduces a risk-based approach, categorizing AI systems into different levels of risk. High-risk applications, such as those in critical infrastructure and healthcare, will face stricter requirements, including mandatory assessments and human oversight. Transparency and accountability are key aspects of the AI Act. Developers and providers of AI systems must provide clear and understandable information about their systems’ capabilities and limitations, enabling users to make informed decisions. Human oversight is also emphasized to ensure that individuals retain responsibility for decisions made by AI systems.

To protect fundamental rights and prevent misuse, the AI Act prohibits certain practices, such as AI-enabled social scoring and real-time biometric identification in public spaces, unless strictly justified and compliant with EU law. Boundaries are set to safeguard privacy and protect citizens from unwarranted surveillance. The AI Act highlights the importance of international cooperation and establishes the European Artificial Intelligence Board. This board will facilitate collaboration among EU member states, ensuring consistent enforcement of the legislation and fostering partnerships to address global AI challenges.

The initial proposal for the AI Act categorized AI systems into four groups:

  1. prohibited, 
  2. high risk, 
  3. low risk, and 
  4. minimal risk. 

However, due to recent advancements in generative models such as ChatGPT, a fifth category, general-purpose AI systems, has been added. This category will be subject to stricter regulations, including mandatory disclosure of training data and higher penalties for non-compliance. Formal adoption of the AI Act is expected in early 2024.

Once the AI Act is formally adopted, organizations will be given a transition period whose length depends on the type of AI system they employ. Provisions regarding prohibited AI systems will become enforceable six months after the Act is finalized, while those related to general-purpose AI systems will become enforceable 12 months after that date. The remaining provisions of the AI Act are expected to become enforceable in 2026.

Non-compliance with the AI Act carries significant consequences, including substantial fines. Violations of prohibited AI applications can result in fines of up to €35 million or 7% of annual global turnover. Violations of the AI Act’s obligations may incur fines of up to €15 million or 3% of annual global turnover. Additionally, supplying incorrect information to regulators can lead to fines of up to €7.5 million or 1.5% of annual global turnover. These penalties emphasize the seriousness with which non-compliance will be treated under the AI Act.

For inquiries please contact:

RBI Regulatory Advisory

Raiffeisen Bank International AG | Member of RBI Group | Am Stadtpark 9, 1030 Vienna, Austria  | Tel: +43 1 71707 - 5923