Julien Chiaroni, Director of the Grand Challenge in Artificial Intelligence at the General Secretariat for Investment (SGPI)

Julien Chiaroni: "We need the collective to work together to define trustworthy AI."

"In my view, Responsible Artificial Intelligence responds to a set of shared values and specific characteristics: robustness, security, transparency, respect for privacy, etc. It must be measurable using standardized indicators."

What is your mission in the Grand Challenge?

In the Grand Challenge, our approach is collective, holistic and agile. It focuses on three main areas:

  1. Norms and standards: We specify these so that they can be operationalized and, in particular, ensure compliance with future European regulations.
  2. Assessment and compliance: We work on audits, compliance assessments, certifications and approvals to ensure that end users can have trust in the products and services put on the market.
  3. Technologies: We develop tools, software and methods that contribute to the design of trusted AI.

Can you elaborate on this idea of “trusted AI” or the “AI of the Enlightenment”, as mentioned in your report?

This debate around trust is not confined to AI. Similar debates have taken place in the past to validate the benefits and establish best practices for other technologies. To answer these questions, Europe is now using regulation to push for AI products and services that meet society's expectations in terms of transparency, safety and responsibility. This kind of AI is therefore not characterized solely by data, but also by values shared among European citizens. It is this AI that we call the "AI of the Enlightenment" in the report I co-authored with Arno Pons from the Digital New Deal think tank.

To accompany this momentum and change the current perception of AI, we also need to demystify the sci-fi image associated with it, and to clarify what it is capable of doing. Admittedly, AI can do a lot of things, but it can't do everything. Besides, there isn't one single AI, but many. Education, information sharing and training are therefore essential.

To carry out this work and continue to refine this trusted AI, we need to consolidate an ecosystem of players engaged in a co-construction approach. The collective enables us to break down silos, share multidisciplinary knowledge, and build a common language of ethical AI that can be applied in all fields.

We have created a collective. To date, we have brought together more than 50 partners (industrialists, major groups, start-ups and SMEs, academics) and are continuing to broaden the spectrum of activities and skills.

Our open community discusses and draws on all points of view: from business leaders who use AI, start-up managers, data experts, and researchers in the humanities and social sciences.

How do we convince organizations to regulate their AI?

To develop AI more widely in a responsible manner, the three main obstacles companies encounter must be removed:

  • The cost of compliance: This is real but can be offset by the advantages brought by Responsible Artificial Intelligence in terms of business and image.
  • The new organization needed to transform: To build trust and optimize their systems, companies absolutely must establish new processes, methodologies and partnerships.
  • Engagement in discussions outside their area of expertise: Sharing problems with other players to find solutions together takes time but helps all the companies involved move forward.

At the same time, a balance must be maintained to continue innovating.

Would the creation of a label be the best way to regulate AI without restricting innovation?

Admittedly, some players see the label as a limit to innovation, and it is indeed a set of constraints. But it can be transformed into a set of opportunities: for academic players, it opens up new fields of research; for industrial players, it boosts competitiveness. A label can have real business value. For example, if a labeled company can prove that its recruitment application is not discriminatory, this gives it a real competitive edge.

The label can therefore be a way of regulating AI whilst allowing organizations to find the right balance between regulation, innovation and business.