Cédric Villani: “In the ecological transition, AI is part of the solution but is by no means THE solution.”

29 June 2023
“Referring to Responsible Artificial Intelligence can be misleading. It’s up to the designers of algorithmic systems to be responsible.”

Cédric Villani, mathematician, Former Deputy in the National Assembly of France and author of the report “For a meaningful artificial intelligence, towards a French and European strategy” (2018)

In your report, you tackle the question of AI ethics. How would you define Responsible Artificial Intelligence?

For AI to be responsible, we need to put in place responsible processes for designing, implementing and controlling it. Responsibility lies in the hands of humankind. We need to prevent procedures that are insufficiently rigorous, for society or for the individual, from resulting in the “irresponsible” use of AI. It’s therefore essential that we question not just the purpose but also the architecture of algorithmic systems: their transparency, their empirical evaluation, their biases, and their response to environmental issues. We need responsible people, but also well-calibrated processes and relevant indicators. And when we address the idea of AI responsibility, we must take its ecological footprint into account.

Your report does indeed talk about the role of AI in the ecological transition. How can AI be used as a lever?

Our report was, I believe, the first with an international ambition to explicitly address the topic of ecology and to show both the bright side and the dark side of AI in this field. AI can contribute to the ecological transition, from assessing biodiversity to improving the efficiency of turbines on wind farms, and by optimizing energy consumption or water networks. Unfortunately, however, it can also help all the enemies of the ecological transition. Algorithmic systems can serve the interests of companies that exploit fossil fuels, promote the most polluting cars, or help certain large industrialists put pressure on their subcontractors and their staff. Regulation is essential in order to set rules, standards and bans. And in addition to this regulation, we need quality human resources.

More specifically, what are the ecological challenges that AI must meet?

AI has a significant ecological impact through its data storage infrastructures, pollution linked to mining, consumption of resources, energy, and even space. Data centers today take up a considerable and rapidly growing surface area. I am therefore in favor of digital sobriety, and limiting the resources allocated to digital technology.

But one of the major challenges of AI is to refrain from giving it a role that it cannot accomplish alone. To avoid facing the “real” problems, players (governments, institutions, companies, etc.) tend to divert attention and point the finger at the algorithm, which often turns out to be a minor problem.

If we apply AI to the agricultural sector, it can help with optimization, accounting, detecting animal suffering, etc. In practical terms, this help translates into extra days off and additional income, which make the farmer’s everyday life easier. So AI is at the service of humans, but it is not a revolution. To be able to feed billions of human beings whilst preserving the planet, we first and foremost need to raise questions about chemistry, biology, breeding models, consumption habits, the use of fertilizers, pesticides, etc., and make enlightened and courageous choices. In no way does AI exempt us from these deliberations.
