Cédric Villani, mathematician, Former Deputy in the National Assembly of France and author of the report “For a meaningful artificial intelligence, towards a French and European strategy” (2018)
How do you see AI today, both as a mathematician and as a politician?
For me, AI is not only a pragmatic issue but also a subject of theoretical reflection, in terms of both its algorithms and its social, human and even philosophical implications. Our society suffers from many biases, accepted by all, of which we are not necessarily aware. In a trial, for example, the fate of the accused can differ depending on the city, the country, the file, or even the judge's personality. These human biases can be amplified by AI. In the United States, recidivism-prediction software was denounced for its tendency to place Black defendants at a disadvantage. This was due not to the programming itself but to the case law on which the system was trained, which carried a racial bias. Artificial intelligence can therefore be a tool for raising awareness of our biases: it can enlighten us and help us reconsider our practices and review our prejudices.
What developments have you seen in the field of Responsible AI since the publication of your report?
There has been real progress: the establishment of interdisciplinary AI institutes, the AI Advisory Council, programs on AI ethics at French and international level, and the Grand Challenges in the field of AI… Since the report was published, artificial intelligence, a marginal topic four years ago, has been taken up and addressed by decision-makers. But we still have a long way to go to bring about Responsible Artificial Intelligence, which will require cultural changes, human resources… changes linked to society even more than technical ones.
In your opinion, how could we regulate AI and implement Responsible AI?
To regulate AI and encourage responsible use, the State must intervene. That is precisely its role as a political power: to set rules, standards and bans, and to enforce them.
The State must also promote initiatives. Without necessarily getting involved in developing solutions and technological tools itself, it must foster the creation of toolboxes and encourage, federate and adopt them. This is why the actions of the community, made up of companies, French research laboratories, consulting firms and so on, are essential. The community provides the necessary technical skills and follows the rules set by the State in order to achieve these objectives.
In your report, you mention the creation of a label. How could this help organizations to make their Artificial Intelligence Responsible?
The creation of international norms and standards is one of the major geopolitical successes of our continent. It is therefore important that we have labels to regulate AI, as we do in other areas.
The label is not the whole solution, but it is part of it. It enables us to validate the rigour of the process for designing and implementing an algorithm, the transparency and confidentiality of its databases, the incorporation of feedback, etc.
Finally, the label can also facilitate the sharing of best practices, the creation of standards and the pooling of toolboxes. It guarantees that an outside eye has verified the entire process: from the chain of motivation, through interoperability, to its uses.