Bertrand Braunschweig, Scientific Coordinator of the Confiance.ai program at the SystemX Institute for Technological Research and member of Positive AI's panel of experts
As a researcher in system dynamics, I started working on artificial intelligence in 1987. I managed AI research activities at IFP Energies Nouvelles, chaired the French Association for Artificial Intelligence, and then joined the French National Research Agency (ANR) in 2006 as a program manager, then became Head of the Department of Communication and Information Science and Technology (STIC). In particular, I coordinated the publication of two books on AI: the ANR's booklet on AI in 2012 and the white paper on artificial intelligence by the French National Institute for Research in Digital Science and Technology, which was initially published in 2016 followed by a second version in 2021.
Since 2015, I have believed that trust is key to the success of AI. In some of its applications, such as recommending tailor-made content (films, music, advertisements, etc.), an error is not necessarily serious; it can be, however, when human, economic or environmental stakes are involved: driving an autonomous vehicle, supervising a nuclear power plant, detecting skin cancer, etc. In cases like these, the stability, robustness, interpretability, explainability and certification of AI systems are essential for minimizing errors and their impact. The European Commission speaks of "high-risk systems", a category that includes numerous applications affecting people (employment, justice, access to services, etc.) as well as critical industrial infrastructures.
Trust is a rich and multifaceted issue. Trusted AI is AI that meets technological, human and societal requirements, and it is created by combining several factors.
My mission as scientific coordinator is to serve the Confiance.ai program, which is developing technologies to increase trust in AI for critical systems (aeronautics, transport, energy, defense, security, etc.). To date, we have developed, tested and approved around a hundred software components to meet industrial players' need for trust in specific use cases such as weld verification, product consumption forecasting, and predictive maintenance in aviation. Over and above the forthcoming regulations, the introduction of trusted AI systems represents a significant productivity gain for these players across the entire industrial cycle.
Yes, we will get there. Nevertheless, trusted AI is a very long-term project.
To take a well-known example, we are still marked by the death of a pedestrian who was struck by a self-driving car in the United States in 2018. As César A. Hidalgo explains in his book "How Humans Judge Machines", we don't expect the same things from humans as we do from machines – we do not accept errors by the latter. Despite the benefits AI could bring to road safety (potentially cutting the number of deaths per year on French roads to a tenth of the current figure), we must proceed with caution, given the dramatic human consequences of certain AI failures, which have a lasting detrimental effect on the image of the associated innovations.
I am convinced that it's by investing in research that we will find solutions (software, algorithmic, hardware, hybrid, etc.) that will allow us to reduce the error rate of AI, and therefore limit the associated risks.
The definition of standards also helps with this. One example is the AI Act, for which I am helping to draft harmonized standards. The particular challenge of this regulation, currently under discussion at the European Parliament, is to establish requirements that will help all stakeholders (developers, users, experts, etc.) benefit from trustworthy AI.
Given my experience and my long-standing investment in trusted AI, I naturally agreed to join this panel at Positive AI. I'm contributing all the knowledge I've acquired, particularly through the Confiance.ai program, and I'm also sharing the progress of the discussions I'm taking part in at European level to help Positive AI finalize its reference framework. I think it's useful to have bodies like these, and I'm delighted to be in a position to facilitate synergies between them.
The Positive AI label is an important, interesting undertaking backed by credible players. I'm eager to see how many companies will apply for it, to discuss their problems with them, and to help them find solutions.