Interview with Raja Chatila, member of the Independent Committee of Experts in AI and Ethics of Positive AI

19 March 2024

“Responsible AI must be considered throughout the value chain, from its design to its use. Like any technology, and even more particularly this one, AI must be developed with responsibility. But who is responsible?”

Raja Chatila is Professor Emeritus of Artificial Intelligence, Robotics and IT Ethics at Sorbonne University and former Director of the Institute of Intelligent Systems and Robotics (ISIR-CNRS).

How would you define Responsible AI?

Responsible AI is artificial intelligence that must be designed responsibly by humans in a way that respects human values and the environment, throughout the systems’ life cycle. All stakeholders involved must be responsible.

Like biologists, who have been concerned about ethics for years and have banned human cloning, for example, IT professionals cannot ignore responsibility. It is an essential element in the regulation of AI. They must be aware that what they are doing is not neutral and that the consequences and risks can be significant.

A few days ago, Google CEO Sundar Pichai shared his concerns about AI being “badly deployed” and said that “generative AI [should] be regulated like nuclear weapons”.

In your opinion, what are the greatest challenges today for AI?

Artificial intelligence is progressing constantly, and what sets it apart is the rapid pace at which it is evolving and the fact that, unlike in other scientific fields, industry players, particularly those in Silicon Valley with their considerable financial resources, are playing a major role in research. The time between design, development and commercial deployment is very short. We are no longer talking about the timeframe we were accustomed to, where several years elapsed between mainly academic research and industrial commercialization. This speed – you could call it a race between the designers of these systems – is changing society faster than our ability to understand and assimilate the challenges of this economic and social transformation.

Let’s take a concrete example: OpenAI produced and rapidly launched ChatGPT, and millions of users connected to it within days. This technology, which now makes it possible to produce speech resembling human speech, was not designed for any particular application. Tomorrow, it could create images and videos and be even more convincing and more powerful. It raises questions about ethics. And ethics has a much longer timeframe. We need to take the time to reflect on the foundations of algorithmic systems, their consequences, and the values they may call into question. This first challenge is therefore both ethical and socio-economic.

Second challenge: the legislative and legal issue. We need to agree on the legal definition of artificial intelligence, which is supposed to simulate human intelligence. In order to legislate, we need an operational definition of what this term covers, and we need to define human responsibilities if legislation is to be effective.

You recently signed the moratorium to “pause AI”. Why do we need to slow down the “dangerous race” towards ever more capable AI systems?

It’s not about pausing AI, but about pausing the development of systems more powerful than GPT-4. I signed this moratorium, but that doesn’t mean I agree with every word or every idea, let alone with every signatory. My goal was to make people aware that there was a problem. It’s not about being negative towards artificial intelligence, but about raising awareness of how these technologies are developed and used. This is the whole idea behind responsibility. It’s important that we stop today, hold up the red card, and ask our democratic societies the right questions: what is it that we are doing? Should we put a stop to generative systems? The moratorium will certainly not be followed by action, and a six-month break would not change the situation, but it was essential to raise awareness among the general public and decision-makers, and it worked.

We now need to build on this success and make some constructive proposals. I also signed another petition, much less publicized, which proposes elements to be included in future regulations.

What safeguards do you think should be put in place in the development of future generative AI systems?

Several schemes already exist. In China, a draft regulation specifically targeting generative AI systems was recently published; we can draw some inspiration from it. At the European level, the AI Act currently being drafted offers a legislative framework that is solid but still needs to be clarified, particularly as regards liability issues, and it will only come into force in about two years.

The AI Act is based on the risk posed by the “intended” use of the system (“risk-based approach”), i.e. the risk arising from its application. For example, the risk to health in the case of medical devices, to personal integrity in the case of transport, and even to human rights in the case of surveillance, facial recognition, and recruitment. Several levels of risk are then defined:

– If the risk is “unacceptable”, the system is banned from being placed on the market;

– If the risk is “high”, prior certification is required;

– If the risk is “medium”, a certain level of transparency is required.

However, generative AI – ChatGPT in this case – does not fit perfectly into this framework and these levels of application. Regulation cannot be based solely on usage and must not ignore the way in which the value chain is constructed in order to establish responsibilities.

Why did you agree to become a member of Positive AI’s independent panel of experts on AI and ethics? What is your mission?

From 2018 to 2020, I belonged to the High-Level Expert Group on Artificial Intelligence, which gave the European Commission recommendations for trustworthy AI. I am a member of the French National Pilot Committee for Digital Ethics and co-chair of the Responsible AI Working Group of the Global Partnership on AI. I am also continuing my research work, particularly on the issues of machine learning and human-machine interaction. I therefore naturally wanted to apply this expertise in a practical context with Positive AI.

This is a positive initiative, because companies are anticipating future legislative constraints and voluntarily taking on their share of responsibility in order to establish the mechanisms and processes needed to make AI systems compliant, reliable, robust and respectful of fundamental values (security, privacy, human control, etc.). I would also encourage them to carry out research in order to master and understand the subject and to produce responsibly. They are the most important contributors where these issues are concerned. And the standards that they are going to define, test and disseminate in order to certify one system or another are valuable and practical tools on which the AI Act can be based.

For the time being, the two other members of the panel of experts, Bertrand Braunschweig and Caroline Lequesne Roth, and I have given our opinion on the Positive AI reference framework that has been defined, shared our criticisms and identified shortcomings. We are very curious and enthusiastic about following this label and seeing how it evolves.
