Caroline Lequesne, Associate Professor in Public Law at the Université Côte d'Azur, Director of the Master II Algorithmic Law and Data Governance, and member of Positive AI's independent panel of experts on AI and ethics
Responsible AI is conceived, designed and deployed in the service of humans. It must be understood as a collaboration between humans and machines, one that respects fundamental rights such as human dignity, privacy and the principle of equality.
It is interesting to note that the contemporary ethical debate around responsible artificial intelligence emerged as early as 2016.
There are many issues at stake for society and even civilization. I could mention at least three.
The first challenge is the operationalization of values and fundamental rights within our society and organizations. In very concrete terms, what are the legal processes and protocols that will enable us to guarantee these key principles: human dignity, respect for privacy, gender equality?
The second challenge: how do we ensure that artificial intelligence benefits as many people as possible and is not just a pawn in the race for productivity (produce more, faster and at a lower cost)?
The third challenge: how do we prevent AI from ushering in the reign of the "idiot" human? It is essential to think in terms of human-machine collaboration, not blind delegation. For example, generative AI such as ChatGPT is revolutionizing the world of work. How do we ensure it fits into work environments? That it is deployed in the service of employees, of teams, of a virtuous dynamic overall? Another use case is the algorithmic video surveillance system recently authorized by the French National Assembly for the 2024 Olympic Games. The officers who operate it need training and skills development so that they can use it to its full potential.
The aspect that needs clarification, and which researchers have been questioning since the bill was tabled, is how this intelligent video surveillance is to be defined.
The law speaks of "anticipating predetermined crimes" and detecting abnormal elements using so-called intelligent video surveillance, but what exactly are these events? What counts as abnormal behavior in public space, and how can it be identified? One possible avenue is emotional and behavioral recognition, but this is widely criticized as dysfunctional and scientifically unproven, and its uses raise questions in terms of civil liberties. In particular, there is a risk of discrimination, as the systems may prove stigmatizing. What we also need to be vigilant about with this so-called "experimental" law is that, beyond testing the technology, we are also testing a democratic process for Responsible AI: have the checks been carried out? Have the officers been trained? Have the reports been published, and have they enabled the CNIL to make a decision? Have the companies been transparent?
On the other hand, if intelligent video surveillance is limited to monitoring abandoned objects and studying crowd density, the risk is lower, especially as science provides more evidence of its effectiveness in these areas. Under these assumptions, the systems seem less risky in terms of freedoms, and therefore more socially acceptable.
However, my definitive answer will depend on the forthcoming decree.
Yes, the AI Act, which is only the beginning of a series of texts, proposes a legislative framework for developing and permitting the use of algorithmic systems.
In this context, the European legislation under discussion provides for a labeling and certification system: in the future, market access will be conditional on prior certification attesting compliance with European standards and legislation.
To make these regulatory principles for the development of Responsible AI workable in practice, the European legislator plans to rely largely on private players and on certification and standardization bodies. The Positive AI approach acknowledges this progress and is part of this dynamic: its reference framework for sharing initiatives and best practices aims to contribute to the debate on standards and technical norms for ethical and Responsible AI. Initiatives of this type are therefore essential to the European standard-setting dynamic.
Today, companies are the leading players in innovation. They therefore need to be proactive, show that they are aware of the risks, and accept their responsibilities in the face of the civilizational challenges posed by AI, just as they had to for the GDPR (General Data Protection Regulation). Ultimately, this will contribute to the social acceptability of these technologies and to the model of society we want for tomorrow.
It is therefore very interesting to see what these players are doing: how they are anticipating the legislative framework being drafted, but also how they are responding to the expectations and concerns of society. We are at a particularly strategic juncture.
I believe firmly in this field approach to research, and as such I am delighted to have joined Positive AI's expert panel, to be present at these discussions and to encourage this kind of initiative.
The two other external experts, Bertrand Braunschweig and Raja Chatila, and I have chiefly worked on Positive AI's reference framework, challenging and refining it in the light of our respective areas of expertise. We can't wait to work on what comes next, because now is the time.