Interview with Bertrand Braunschweig, member of the Independent Committee of Experts in AI and Ethics of Positive AI

19 March 2024

“It’s by investing in research that we will find solutions that will enable us to reduce the error rate of AI and therefore limit the associated risks.”

Bertrand Braunschweig is the Scientific Coordinator of the Confiance.ai program at the SystemX Institute for Technological Research

How long have you been working on artificial intelligence, and on the subject of trust in particular?

As a researcher in system dynamics, I started working on artificial intelligence in 1987. I managed AI research activities at IFP Energies Nouvelles, chaired the French Association for Artificial Intelligence, and then joined the French National Research Agency (ANR) in 2006 as a program manager, before becoming Head of the Department of Communication and Information Science and Technology (STIC). In particular, I coordinated the publication of two books on AI: the ANR’s booklet on AI in 2012 and the white paper on artificial intelligence by the French National Institute for Research in Digital Science and Technology, initially published in 2016 and followed by a second version in 2021.

Since 2015, I have believed that trust is key to the success of AI. For some applications, such as recommending tailored content (films, music, advertisements, etc.), an error is not necessarily serious; but it can be when human, economic or environmental stakes are involved: driving an autonomous vehicle, supervising a nuclear power plant, detecting skin cancer, etc. In cases like these, the stability, robustness, interpretability, explainability and certification of AI systems are essential for minimizing errors and their impact. The European Commission speaks of “high-risk systems”, a category that includes numerous applications affecting people (employment, justice, access to services, etc.) as well as critical industrial infrastructures.

How would you define trusted AI?

Trust is a rich and multifaceted issue. Trusted AI is AI that meets technological, human and societal requirements. It is created by combining several factors: 

  • Technology: the system must be robust, reliable, and used within its intended field of application, etc.;
  • Interaction with humans: transparency, explainability, and the quality of exchanges are essential to ensure that people maintain control of AI systems;
  • Ethics: AI must take account of diversity, respect for minorities, the absence of bias, and respect for the environment;
  • Validation by a trusted third party: such as a label guaranteeing that an authority has validated the responsible aspects of AI.

My mission as scientific coordinator is to serve the Confiance.ai program, which is developing technologies to increase trust in AI in critical systems (aeronautics, transport, energy, defense, security, etc.). To date, we have developed, tested and approved around a hundred components and pieces of software to meet the need for trust among industrial players in specific use cases such as weld verification, product consumption forecasts, and predictive maintenance in aviation. Over and above the forthcoming regulations, the introduction of trusted AI systems represents a significant productivity gain for them over the entire industrial cycle. 

Following the request from thousands of experts for a six-month moratorium on research into and training of AI systems more powerful than GPT-4, do you remain confident about this question of responsible AI?

Yes, we will get there. Nevertheless, trusted AI is a very long-term project. 

To take a well-known example, we are still marked by the 2018 death of a pedestrian in the United States who was run over by a self-driving car. As César A. Hidalgo explains in his book, “How Humans Judge Machines”, we don’t expect the same things from humans as we do from machines – we do not accept errors by the latter. Despite the benefits that AI could bring to road safety (potentially reducing the annual number of deaths on French roads to one tenth of its current level), we must proceed with caution, because the dramatic human consequences certain AI systems can have leave a lasting detrimental mark on the image of the associated innovations.

I am convinced that it’s by investing in research that we will find solutions (software, algorithmic, hardware, hybrid, etc.) that will allow us to reduce the error rate of AI, and therefore limit the associated risks. 

The definition of standards also helps with this. One example is the AI Act, for which I am helping to draft harmonized standards. The particular challenge of this regulation, currently under discussion at the European Parliament, is to establish requirements that will help all stakeholders (developers, users, experts, etc.) to benefit from trustworthy AI.

Finally, why did you agree to become a member of Positive AI’s independent panel of experts on AI and ethics? What is your mission?

Given my experience and investment in the subject of trusted AI for many years, I naturally agreed to join this panel at Positive AI. I’m contributing all the knowledge I’ve acquired, particularly in the Confiance.ai program, and I’m also sharing the progress and discussions I’m taking part in at European level to help Positive AI finalize its reference framework. I think it’s useful to have bodies like these, and I’m delighted to be in a position to facilitate synergies between them. 

The Positive AI label is an important, interesting undertaking backed by credible players. I’m eager to see how many companies will apply for it, to discuss their challenges with them, and to help them find solutions.
