Interview with Célia Zolynski, Professor in Private Law and Coordinator of the Artificial Intelligence Observatory at the University of Paris 1 Panthéon-Sorbonne

Célia Zolynski: "We must not separate the AI system from its environment as a whole."

Célia Zolynski
"For Artificial Intelligence to be 'Responsible', we need to analyze not just the algorithmic system itself, but also the various players in the chain who are involved in its manufacture, deployment and use."

Célia Zolynski is the Coordinator of the Artificial Intelligence Observatory at the University of Paris 1 Panthéon-Sorbonne, a member of the French National Pilot Committee for Digital Ethics (CNPEN), and a qualified member of the French National Consultative Commission on Human Rights (CNCDH).

What is your mission at the Paris 1 AI Observatory (AI ObS)?

The Observatory's objective is to enable students and young researchers to build up their skills on subjects relating to AI and to tackle them using an interdisciplinary approach (law, history, philosophy, art, geography, economics, etc.). We also provide information and training on AI to the general public, in order to raise awareness among as many people as possible and ensure a broad understanding of the phenomenon.

We have focused the Observatory's projects around four key types of initiative:

  1. Observe: We organize interdisciplinary seminars, and we conduct and publish scientific monitoring as well as interviews with the various stakeholders (representatives of public authorities, university researchers, etc.).
  2. Analyze: We conduct interdisciplinary research projects related to AI.
  3. Train: We create and provide MOOCs and training modules on the societal challenges of AI. We hold doctoral seminars and promote interaction between AI and teaching.
  4. Enlighten: We have set up a panel of experts and organized outreach work. We organize symposiums for public authorities and the general public around topics such as regulation, law, the ethics of artificial intelligence, and its impact on society.

How would you define trusted AI?

Today, AI regulation focuses more specifically on the compliance of AI systems when they enter and are deployed on the European market. If we are to go even further on issues of respect for human rights, we need to establish other monitoring processes. This is also the subject of an ambitious and complex study conducted by the Council of Europe on risk and impact analysis with regard to respect for human rights.

To bring about trusted AI, we need to analyze not just the algorithmic system but also the various players in the chain, who are involved in the manufacture, deployment and use of this decision support system.

What role can organizations play to encourage Responsible AI?

Every player must indeed be involved and, as such, the role of organizations that use algorithmic systems is decisive. Their use will be regulated, but any initiative that goes further in the responsible use of algorithmic systems is extremely important. As current regulation has its limits, we advocate an approach that involves all players in these accountability mechanisms and in the establishment of trustworthy AI.

In practical terms, to ensure that AI systems are acceptable before they are deployed internally, companies must consider establishing a dialog with all involved players (staff representatives, end users, etc.) and introducing governance mechanisms, for example by setting up an internal operational ethics committee.

How do we get organizations to regulate their algorithmic systems and help them to do so?

On your first point: companies will have to comply with the new regulations, under penalty of sanctions. From our experience with the GDPR, we know that this risk has had a "prophylactic" effect on companies' concern for compliance with data protection rights, and has encouraged greater awareness. The hope is that AI regulation will produce the same effects.

In addition, the employees of these organizations must be supported. The creation of an "AI compliance officer", modeled on the GDPR data protection officer, could be one way forward. This would create a link to the regulator, who would come to check compliance with the adopted regulations. This new figure also appears within the framework of the Digital Services Act (DSA), adopted by the EU at the end of October. At the Observatory, we are working in parallel on a training project for this new profession, so that future "AI compliance officers" can help companies apply the AI regulation and deploy their decision support systems responsibly. Finally, we are interested in employee rights and in how to involve and train union representatives as well as all employees.

We are also discussing the audit mechanism that will be set up under the AI regulation, and its independence, to ensure that the commitments made by companies are in line with what has been announced. We believe it is essential to have strong interaction with the regulator.

Finally, another approach is to promote dialog and information sharing with organizations, to include them in the debate and create an open "community". If there are "meta" questions that go beyond the scale of an individual company, it would be interesting to escalate them to national committees, which could examine the responsibility, environmental, and safety issues raised by new AI systems: how to better combat online hate, for example, or how to question the use of HR analytics and thus counter discrimination and violations of human rights.

We could invite organizations to talk with university academics and exchange views. We also plan to contribute to co-design workshops in the future, to encourage lively debate on how to map out the relevant questions before a project is launched. In practical terms, this could also involve developing methodologies and analysis grids to help companies – which do not all have the means to set up an operational ethics committee – ask themselves ethical questions.

At the National Pilot Committee for Digital Ethics (CNPEN), we are also considering how to support organizations and "challenge" them to hold a broader societal debate that extends beyond the company. This is also in line with the recommendations of the Villani Report.