The AI Act in Europe

The AI Act is a new European regulation that establishes harmonized rules for artificial intelligence (AI) systems within the EU. Its primary goal is to encourage trustworthy, human-centred AI applications. In addition, the AI Act protects citizens' fundamental rights, ensures safety and secures a high level of environmental protection. A further benefit is that the legislation supports the free movement of AI-based goods and services within the internal market.
The AI Act introduces new standards and guidelines that your AI systems are required to comply with. As an organization, this means strengthening confidence in your AI solutions while managing the risks of AI-related cyber attacks, and developing strategies to counter unwanted uses of AI. Adapting to these new rules in good time is crucial for compliance and for making the most of reliable AI technologies.
The AI Act has different obligations depending on the level of risk of the AI system you are using or developing:
- Prohibited AI: Developing, offering and using certain AI systems is strictly forbidden. Violations are severely punished, with fines of up to €35 million or 7% of annual global turnover, whichever is higher.
- High-risk AI: Systems that pose significant risks are subject to extensive obligations, including mandatory risk analyses, human oversight, full transparency and mandatory registration of the system.
- Limited-risk AI: These systems are primarily subject to transparency obligations, such as clearly informing users that they are interacting with AI.
- Minimal risk AI: No specific obligations apply. However, best practices are strongly recommended.
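As a rough sketch of how the tiers above might be recorded in an internal AI inventory, the following Python fragment maps each tier to the example obligations named in this list. The tier names and obligation strings are illustrative shorthand, not official terminology from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """The four AI Act risk tiers described above (illustrative labels)."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of each tier to example obligations from the text.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["do not develop, offer or use"],
    RiskTier.HIGH: ["risk analysis", "human oversight",
                    "transparency", "registration"],
    RiskTier.LIMITED: ["notify users that AI is in use"],
    RiskTier.MINIMAL: [],  # no specific obligations; best practices recommended
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations recorded for a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

An inventory like this makes the later compliance steps easier: each system in your organization gets a tier, and the tier determines which checks apply.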
In addition, AI developers must meet various compliance obligations:
- Preparing detailed documentation explaining how the AI works and how the system is trained.
- Applying ethical and technical standards to avoid bias and discrimination in the AI model.
- Implementing an effective risk management system specific to AI.
The AI Act does not stand alone; it works together with existing EU regulations, such as:
- GDPR: AI systems processing personal data must comply with strict privacy rules.
- NIS2: AI solutions within essential sectors, such as energy and telecoms, must meet cybersecurity standards.
Non-compliance carries significant risks:
- High fines of up to €35 million or 7% of global annual turnover, whichever is higher.
- Possible ban on the use of non-compliant AI systems.
- Serious reputational damage and legal consequences.
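To see how the "whichever is higher" fine cap works out in practice, here is a minimal sketch with purely illustrative turnover figures:

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious violations:
    €35 million or 7% of annual global turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# A company with €1 billion turnover: 7% = €70 million, exceeding €35 million,
# so the turnover-based cap applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies the fixed €35 million cap dominates; for large ones the turnover percentage does, which is why the maximum exposure scales with company size.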

How do you ensure compliance with the AI Act?
The AI Act is already in force. Therefore, Opensight advises your organization to take the following steps:
- Map AI use: Identify which AI systems are being used or developed within your organization. Classify these systems by risk level.
- Check specific obligations: Verify whether your AI is sufficiently transparent, its risks are well managed and its documentation is complete.
- Integrate AI risk management: Make AI compliance part of your existing Information Security Management System (ISMS) or Governance, Risk & Compliance (GRC) framework.
- Combine with existing regulations: Ensure integration with GDPR privacy regulations and NIS2 cybersecurity standards.
- Use support tools: Automate compliance processes and ensure proper documentation to make audits hassle-free.
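The five steps above could, for example, be tracked as a simple per-system compliance checklist. The following Python sketch uses entirely hypothetical field and step names to show the idea:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system in the organization."""
    name: str
    risk_tier: str                       # e.g. "high", "limited", "minimal"
    steps_done: set[str] = field(default_factory=set)

# One illustrative label per step from the advice above.
REQUIRED_STEPS = {
    "map_ai_use",
    "check_obligations",
    "integrate_risk_management",
    "align_gdpr_nis2",
    "automate_documentation",
}

def outstanding_steps(record: AISystemRecord) -> set[str]:
    """Return the compliance steps not yet completed for this system."""
    return REQUIRED_STEPS - record.steps_done

chatbot = AISystemRecord("support-chatbot", "limited", {"map_ai_use"})
print(sorted(outstanding_steps(chatbot)))
```

Even a lightweight record like this, kept inside an existing ISMS or GRC tool, makes it straightforward to demonstrate progress during an audit.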
The AI Act brings sweeping changes for companies that develop, sell or use AI. Invest early in the reliable and transparent deployment of AI to avoid fines, legal problems and damage to your reputation.