AI Act: the latest news?

The Artificial Intelligence Regulation (AI Act) is a regulatory initiative of the European Union, introduced on 21 April 2021 by the European Commission. Its main objective is to set up a harmonized legal framework to regulate artificial intelligence applications, categorizing them according to their level of risk (minimal, limited, high, or unacceptable). On Wednesday, 13 March 2024, the European Parliament ratified legislation to regulate the use of AI. This legislative framework, designed to safeguard fundamental rights and ensure public safety while promoting innovation, was adopted by Members of the European Parliament in a plenary vote, with 523 votes in favour and 46 against. This article explains what is new in the AI Act and what it implies for choosing the right technology.

What’s new in the final text of the AI Act

1. Categorization of specific risks and obligations

The final text refines the risk-based categorization of AI systems, distinguishing applications by their potential for harm. This ranges from minimal-risk to high-risk systems, with specific obligations for each category, including transparency, human oversight and data management.
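The tiered structure described above can be sketched as a simple lookup. This is a purely illustrative sketch: the tier names follow the Act's four categories, but the obligation labels are our own shorthand, not legal terms.

```python
# Illustrative mapping of the AI Act's four risk tiers to the kind of
# obligations each attracts (simplified; not a legal reference).
RISK_TIERS = {
    "unacceptable": ["prohibited outright"],
    "high": ["conformity assessment", "human oversight", "data governance"],
    "limited": ["transparency duties (e.g. disclose AI interaction)"],
    "minimal": ["no specific obligations"],
}

def obligations_for(tier: str) -> list:
    """Return the illustrative obligations for a given risk tier."""
    return RISK_TIERS.get(tier.lower(), [])

print(obligations_for("high"))
```

The point of the structure is that obligations attach to the tier, not to the individual system: classifying a system correctly is the first compliance step.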

2. Documentation and transparency for general AI models

A key addition is the technical documentation and transparency requirements for general-purpose AI models, such as language models. Developers must now detail how these models were built, how they respect copyright, and what data was used to train them.

3. Exemptions for open source models

The final text introduces an exemption for open-source AI models, alleviating some of the regulatory obligations for these projects, provided they openly share the details of their design and operation.

4. AI Prohibitions and Uses

The text clarifies and extends prohibitions on certain uses of AI that are considered to pose an unacceptable risk, such as mass biometric surveillance and social scoring systems. It also details the conditions under which high-risk AI systems can be deployed.

5. Governance and Implementation

The final text establishes a European AI Office to coordinate compliance, centralizing oversight and enforcement of the rules within the European Union.

These adjustments reflect the EU’s intention to create a regulatory framework that promotes both innovation and confidence in AI development, while protecting European citizens and values. The final text of the AI Act marks an important step in the regulation of artificial intelligence, setting a precedent that could influence global standards in this area.

Choosing the right technology 

Non-compliance may result in fines ranging from 7.5 million euros or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the offence and the size of the business. Choosing the right technology is therefore crucial: the AI system selected must avoid falling into the riskier categories described above.
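To make the exposure concrete, here is a minimal sketch of how such a cap could be computed, assuming the "whichever is higher" rule that EU regulations typically apply to large companies (different rules may apply to SMEs); the function name and figures used in the call are illustrative, based only on the two bands cited above.

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Upper bound of a fine: the greater of a fixed amount and a
    percentage of global annual turnover (large-company rule)."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# Most serious band cited in the text: EUR 35 m or 7% of global turnover.
# For a firm with EUR 1 bn turnover, the percentage cap dominates:
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07))  # 70000000.0
```

For smaller firms the fixed amount dominates: with EUR 100 m turnover, 7% is only EUR 7 m, so the cap stays at EUR 35 m.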

Based on these different risks, let's define some concepts that we consider important for choosing technology appropriate under the AI Act regulations:

Explainability: Some generative AI models are not fully explainable; that is, one cannot always explain why the AI gave one answer over another. This lack of transparency is a limited-risk factor under the AI Act. This is why it seems important to favour explainable AI models.

Respect for personal data: We also know that the AI Act classifies as high-risk those AI systems that may adversely affect people's rights. For AI systems that require training data, sensitive or classified data (health, defence, etc.) cannot be included without infringing on people's rights. Therefore, if you process sensitive and/or classified data, you should choose an AI model that does not need such data to perform well.

Easy supervision: Since the AI Act continues to be amended and adjusted, it can be particularly useful to have an AI model that is easy to control, so that you can respond quickly to changing regulatory requirements or correct erroneous decisions. Moreover, since the AI Act encourages constant human oversight, this supervision must be made as easy as possible so that it is as effective as possible.

Thus, the risk-based approach advocated by the AI Act requires decision-makers to choose technology that minimizes these risks. This is where "symbolic analytical artificial intelligence" comes in: because it is explainable, respects personal data, and is easy to supervise, symbolic analytical AI appears to be a solution compatible with the requirements of the AI Act.
Want to process your messages with a trusted AI? Contact us!

And to find out the most appropriate AI model for your use case, fill in our form!