Interview with Caroline Lequesne, member of the Independent Committee of Experts in AI and Ethics of Positive AI

19 March 2024

“Responsible AI must be thought out, designed and deployed in the service of humans. It is essential to think about human-machine collaboration and not to delegate blindly.”

Caroline Lequesne is an Associate Professor in Public Law at the Université Côte d’Azur and Director of the Master II in Algorithmic Law and Data Governance.

How would you define Responsible AI?

Responsible AI is AI that is conceived, designed and deployed in the service of humans. It must be seen as a collaboration between human and machine, carried out with respect for fundamental rights such as human dignity, privacy and the principle of equality.

It’s interesting to note that the contemporary ethical debate around responsible artificial intelligence emerged as early as 2016:

  • Initially, the debate was driven by philosophers and focused on the fundamental values of AI;
  • Then, in the United States in 2018, data scientists warned of the biases, errors and discrimination generated by the use of AI, introducing the notion of ethics into engineering;
  • Finally, in 2020, we realized that the challenges of AI go well beyond the question of bias and that AI requires a legal framework that provides points of reference and takes account of risks. In Europe, the response has revolved around the AI Act. In China, a legislative movement is under way, while in the United States it is the Federal Trade Commission (FTC) that is tackling the issue.

What do you see as the greatest challenges in establishing this Responsible AI?

There are many issues at stake for society, and even for civilization. Let me mention three of them.

The first challenge is the operationalization of values and fundamental rights within our society and organizations. In very concrete terms, what are the legal processes and protocols that will enable us to guarantee these key principles: human dignity, respect for privacy, gender equality? 

The second challenge: how do we ensure that artificial intelligence benefits as many people as possible and is not just a pawn in the race for productivity (produce more, faster and at a lower cost)? 

The third challenge: how do we prevent AI from ushering in the reign of the “idiot” human? It’s essential to think in terms of human-machine collaboration, not blind delegation. For example, generative AI like ChatGPT is revolutionizing the world of work. How do we make sure it fits into work environments? That it’s deployed in the service of employees, of teams, of an overall virtuous dynamic? Another use case: the algorithmic video surveillance system recently authorized by the French National Assembly for the 2024 Olympic Games. The officers who use it need to develop their skills and receive training so that they can exploit it to the full.

What is your opinion on video surveillance, and in what way is it an ethical issue?

The aspect that needs clarification, and which researchers have been wondering about since the bill was tabled, is how to define this intelligent video surveillance. 

The law talks about anticipating “predetermined events” and abnormal elements using so-called intelligent video surveillance, but what exactly are these events? What is abnormal behavior in the public space, and how can it be identified? One possible avenue is emotional and behavioral recognition, but this is widely criticized as dysfunctional and scientifically unproven, and its uses raise questions in terms of freedoms. In particular, there is a risk of discrimination, as the systems may prove stigmatizing.

What we also need to be vigilant about with this so-called “experimental” law is that, beyond testing the technology, we need to test a democratic process for Responsible AI: have the checks been carried out, have the officers been trained, have the reports been published and have they enabled the CNIL (the French data protection authority) to take a position, have the companies been transparent, and so on.

On the other hand, if intelligent video surveillance is limited to monitoring abandoned objects and studying crowd density, there is less risk, especially as science provides more evidence of its effectiveness in those cases. Under these assumptions, the systems seem less risky in terms of freedoms, and therefore more socially acceptable.

However, my definitive answer will depend on the forthcoming implementing decree.

Should the legislative framework, in France or at European level with the AI Act, strengthen regulation?

Yes. The AI Act, which is only the beginning of a series of texts, proposes a legislative framework for the development and authorized use of algorithmic systems.

In this context, the European legislation under discussion provides for a labeling and certification system: in the future, access to the market will be conditional on first obtaining certification attesting to compliance with European standards and legislation.

To verify that these regulatory principles on the development of Responsible AI are respected, the European legislator therefore plans to rely largely on private players and on certification and standardization bodies. The Positive AI approach acknowledges this progress and is part of this dynamic: its reference framework for sharing initiatives and best practices aims to contribute to the debate on standards and technical norms for ethical and Responsible AI. This type of initiative is therefore essential to the European standard-setting dynamic.

Why did you agree to become a member of Positive AI’s independent panel of experts on AI and ethics? What is your mission?

Today, companies are the leading players in the field of innovation. They therefore need to be proactive, to show that they are aware of the risks, and to accept their responsibilities in the face of the civilizational challenges posed by AI, just as they had to for the GDPR (General Data Protection Regulation). Ultimately, this will contribute to the social acceptability of these technologies and to the model of society that we want for tomorrow.

It’s therefore very interesting to see what these players are doing, how they are anticipating the legislative framework that is being drafted, but also how they are responding to the expectations and concerns of society. We are at a particularly strategic juncture. 

I believe firmly in this field-based approach to research, and as such I am delighted to have joined Positive AI’s expert panel, to take part in these discussions and to encourage this kind of initiative.

The two other external experts, Bertrand Braunschweig and Raja Chatila, and I have chiefly worked on Positive AI’s reference framework, challenging it and optimizing it in the light of our respective areas of expertise. We can’t wait to work on what comes next, because now is the time.
