In 2023, the French government launched Albert, a generative and sovereign AI designed to improve the efficiency and quality of citizen services. With artificial intelligence increasingly used in public institutions, it is crucial to understand its applications and limitations. The aim of this article is to explain the different uses of AI in French public institutions, highlighting both their advantages and their limits.
What role does AI play in French public institutions?
Experimenting with generative AI in public services
In October 2023, Stanislas Guerini launched the first experiment in the use of generative AI in public services. Around 1,000 volunteer agents used the tool to draft responses to users' online reviews and comments, as part of the Services Publics+ program. The results show that AI improved the responsiveness of public services: one in two responses is now AI-assisted, the average response time has fallen from 7 days to 3, 70% of agents have a positive impression of the tool, and 74% of users say they are satisfied with the responses they receive.
Albert tool development
At the same time, DINUM is developing Albert, a sovereign, free, open-source generative AI tool created by and for public officials. Albert provides personalized answers and is transparent about its sources, making it easy for all administrations to adopt. In the coming months, Albert will be deployed across the France Services network with volunteer advisors.
Developing the use of French in AI
Jean-Noël Barrot, former Minister Delegate for Digital Affairs, also supports the creation of a French-language data hub, named Villers-Cotterêts, designed to increase the presence of French in AI models. Currently, less than 0.2% of AI model training data is in French. The project aims to strengthen French digital sovereignty by developing AI that reflects French culture.
Public-Private Alliance to strengthen digital sovereignty
The Incubator Alliance, managed by DINUM, brings together members of various government agencies, companies and research institutions to develop sovereign, open-source AI products. For example, CamemBERT, a French language model developed by INRIA, is widely used at ENEDIS to optimize the handling of customer requests, demonstrating a significant return on investment, as illustrated in the sketch below.
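To make this concrete, here is a minimal sketch, in Python, of how a CamemBERT-based classifier might route French customer requests using the Hugging Face transformers library. ENEDIS's actual pipeline is not public: the three-category label set is a hypothetical example, and camembert-base ships without a task head, so it would first need fine-tuning on labeled requests.

```python
import torch
from transformers import CamembertTokenizer, CamembertForSequenceClassification

# Minimal sketch, not ENEDIS's actual system. The classification head below
# is randomly initialized; in practice it would be fine-tuned on a corpus
# of labeled customer requests before being used.
LABELS = ["billing", "outage", "other"]  # hypothetical request categories

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForSequenceClassification.from_pretrained(
    "camembert-base", num_labels=len(LABELS)
)

inputs = tokenizer("Mon compteur affiche une erreur depuis hier.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Predicted category (meaningless until the head is fine-tuned)
print(LABELS[logits.argmax(dim=-1).item()])
```

A small encoder like CamemBERT, fine-tuned on a narrow task, is also far more frugal than a general-purpose generative model, which is part of why such projects can show a clear return on investment.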
However, generative AI has certain inherent limitations
Generative AI, although useful in some cases, is not suited to every use. Given the limitations that remain, it is worth questioning the push to deploy it across public institutions.
Environmental impact
Generative AI models are resource-intensive, consuming large amounts of energy to train and run. This raises ecological concerns, especially at a time when Green IT and the reduction of carbon emissions are global priorities.
Vulnerability to attack
Generative AI can be manipulated by malicious actors to produce false information or misleading content, which could be used to influence public opinion or undermine trust in public institutions. These systems can also be the target of cyberattacks aimed at altering their operation or stealing sensitive data. Moreover, LLMs can harbor what are known as "sleeper agents": malicious behaviors designed to make the model dangerous, introduced either by an external individual or by the LLM itself. We'll cover these in a future article.
Security and data protection
Generative AI systems often require access to vast amounts of data, including personal data, to operate effectively. This raises significant data protection and privacy concerns. For example, the use of the contact-tracing application Aarogya Setu in India raised concerns about the security of the data collected and the possibility of privacy breaches.
Lack of transparency and explainability
Generative AI models are often referred to as “black boxes” because of the difficulty of explaining their decision-making processes. In other words, we can't explain why the model gave one answer rather than another. In the context of public services, this poses a major problem of transparency and accountability, as citizens and public officials need to be able to understand how and why a decision was taken, to ensure fairness and legitimacy.
Trusted AI as a solution
Sovereign AI is not automatically trustworthy AI. Trustworthy AI rests on several key pillars that must be integrated right from the system design stage:
Unbiased: an ethical AI system makes fair, non-discriminatory decisions.
Clarity and transparency: AI decision-making processes must be comprehensible to all end users, i.e. it must be possible to understand why the AI gave one answer rather than another.
Security: AI systems must be designed to resist cyberattacks and be secured against external intrusions. Practices such as data encryption and robust security protocols are essential.
Privacy: AI must be programmed to minimize the data it collects and processes, in line with laws such as the GDPR in Europe (see: CNIL publishes its first recommendations on AI). A minimal sketch of this kind of data minimization follows this list.
Determinism: AI systems must behave consistently and predictably. Errors must be minimized through rigorous testing and code reviews.
Frugality: a trustworthy AI is also a frugal AI, meaning it does not need large quantities of data to perform well. With Green IT concerns becoming increasingly important, frugality is a key criterion for trustworthy AI.
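As announced under the privacy pillar, here is a minimal sketch, in Python, of data minimization: obvious personal identifiers are stripped from a message before it ever reaches an AI system. The regular expressions and placeholder tokens are illustrative assumptions, not a production-grade anonymizer; dedicated tools (e.g. Microsoft Presidio) would be more robust.

```python
import re

# Minimal sketch of GDPR-style data minimization: replace obvious personal
# identifiers with neutral placeholders before the text is sent to an AI
# system. The patterns below are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"(?:\+33|0)[1-9](?:[ .-]?\d{2}){4}\b"),  # French numbers
}

def minimize(text: str) -> str:
    """Replace detected identifiers with neutral placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

message = "Contactez-moi au 06 12 34 56 78 ou via jean.dupont@example.fr."
print(minimize(message))
# -> Contactez-moi au <PHONE> ou via <EMAIL>.
```

Minimizing data in this way serves both the privacy and the frugality pillars: the less personal data a system ingests, the smaller its legal and ethical exposure.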
Today, only symbolic, analytical AIs meet all the criteria of trustworthy AI, thanks to their predictability and explainability. Generative AIs, because of their hallucinations, cannot yet do so. This helps explain why 70% of French people want AI they can trust, according to the AI Impact 2023 Barometer.
Ultimately, the integration of generative AI, like Albert in France, promises to improve the efficiency of public services. However, it also poses a paradox: how can we reconcile the efficiency of AI with the need to maintain sovereign and trusted AI? On the one hand, generative AI certainly enables efficiency gains; on the other, it raises concerns about security, explainability, vulnerability and environmental impact.
To overcome this paradox, public institutions need to develop solutions that are certainly sovereign, but above all trustworthy. By adopting AI that respects ethical and security principles, it is possible to maximize benefits while minimizing risks, ensuring responsible and beneficial use of AI in the public sector. In this sense, choosing symbolic analytical AI, or combining it with generative AI, might have been more appropriate.
And if your company wants to handle its incoming message flows with an AI it can trust, contact us!