Cédric Villani: “A label in favor of ethical AI is not the only solution, but it is part of the toolbox.”

20 July 2023
“State regulation of artificial intelligence is essential in order to set rules, standards and bans.”

Cédric Villani, mathematician, Former Deputy in the National Assembly of France and author of the report “For a meaningful artificial intelligence, towards a French and European strategy” (2018)

How do you see AI today, both as a mathematician and as a politician?

For me, AI is not only a pragmatic issue but also a subject of theoretical reflection, in terms of both its algorithms and its social, human and even philosophical implications. Our society suffers from many biases, widely accepted, of which we are not necessarily aware. For example, in a trial, the fate of the accused can differ depending on the city, the country, the file, or even the judge’s personality. And these human biases can be amplified by AI. In the United States, recidivism prediction software was denounced for its tendency to place Black convicts at a disadvantage. The bias did not come from the programming itself but from the case law on which the system was trained, which reflected racist sentencing patterns. Artificial intelligence can therefore be a tool for raising awareness of our biases. It can enlighten us and help us reconsider our practices and review our prejudices.

What developments have you seen in the field of Responsible AI since the publication of your report?

We have seen real progress: the establishment of interdisciplinary AI institutes, the AI Advisory Council, programs on AI ethics at the French and international levels, and the Grand Challenges in the field of AI. Since the report was published, artificial intelligence, a marginal topic four years ago, has moved onto the agenda of decision-makers. But we still have a long way to go before we achieve Responsible Artificial Intelligence, which will require cultural changes and new human resources: changes linked to society even more than technical ones.

In your opinion, how could we regulate AI and implement Responsible AI?

To regulate AI and encourage responsible use, the State must intervene. That is indeed its role, as a political power: to set rules, standards and bans and to enforce them. 

The State must also promote initiatives. Without necessarily developing solutions and technological tools itself, it must foster the creation of toolboxes and encourage, unite and adopt them. This is why the actions of the community, made up of companies, French research laboratories, consulting firms and others, are essential. The community provides the necessary technical skills and operates within the rules set by the State in order to achieve these objectives.

In your report, you mention the creation of a label. How could this help organizations to make their Artificial Intelligence Responsible?

The creation of international norms and standards is one of the major geopolitical successes of our continent. It’s therefore important that we have labels to regulate AI, as we have for other areas. 

The label is not the only solution, but it is part of it. It allows us to validate the rigor of the process by which algorithms are designed and implemented, the transparency and confidentiality of databases, the incorporation of feedback, etc.

Finally, the label can also facilitate the sharing of best practices, the creation of standards and the pooling of toolboxes. It guarantees that an outside eye has reviewed the entire process: from the initial motivation through interoperability to end uses.
