Four major companies launch Positive AI, an initiative to promote Responsible Artificial Intelligence

[Photo: attendees at a Positive AI conference]

While artificial intelligence promises progress and lies at the heart of major current and future innovations, it also raises questions among the public and the authorities, who are demanding transparency and ethics.

To respond to these concerns and make progress on this subject, BCG GAMMA, L'Oréal, Malakoff Humanis, and Orange France, each a leader in its sector, announce the launch of Positive AI and its label, designed to make AI ethics tangible and accessible for all companies.

This label, which aims to put the proposals made by public bodies into practice, is the first stage of the Positive AI initiative, which also aims to become a space for idea sharing and dialog.

Responsible AI, a key challenge for companies

Today, AI is present in all professions and industries and is no longer the preserve of disruptive innovations in cutting-edge sectors. It is calling into question all economic processes and the organization of our societies. Its omnipresence raises questions for the general public, consumers, and employees, who see in it an opportunity to improve their daily lives, but also the risk of losing their jobs or control over some of the freedoms they enjoy.

Faced with the evidence of what is at stake, the public authorities and French and European experts have launched a debate on digital ethics. Companies, which are at the forefront of the development and deployment of AI, have a major role to play in ensuring that AI combines its potential for innovation with respect for human rights.

However, according to a recent BCG study, although 84% of companies say that Responsible Artificial Intelligence ought to be a priority, only 16% of them have developed a mature program, due in particular to a lack of operational solutions and tools with which to put these principles into practice.

With this in mind, BCG GAMMA, L'Oréal, Malakoff Humanis, and Orange France combined forces and expertise to launch Positive AI in early 2022.

Positive AI launches the Responsible AI label, created by those who use it

Noting the difficulty companies have in putting the recommendations for Responsible AI into practice, the four founding companies of Positive AI are contributing by developing a reference framework that incorporates the key principles of Responsible AI defined by the European Commission. This more concrete frame of reference emphasizes three priority aspects:

  • Justice and Fairness: prevent bias in the form of discrimination and unfair treatment that may exist in both data and algorithms;
  • Transparency and Explainability: ensure that the data and algorithms at the heart of AI systems are accessible, comprehensible, explainable or, at the least, interpretable and reproducible;
  • Human intervention: make sure that organizations can supervise and correct automated decisions so that AI systems are designed for human-machine collaboration in the service of humans.

From 2023, this framework will form the basis for acquiring the Positive AI label, which is granted following an independent audit.

A global and open initiative to advance Responsible AI

Based on this first collective work, Positive AI aims to open itself up to any company that manages and/or develops artificial intelligence systems.

Positive AI will be both a resource center and a genuine community of experts, continuously evolving to advance ethics in AI, share experience, and make Responsible AI accessible to all managers and data experts, regardless of their maturity, industry, or profession.

Positive AI is as keen to open itself up to society as it is to deepen its expertise. It has therefore set up an independent panel of experts on AI and ethics to provide insight, so that the reference framework and evaluation methods can be enhanced. The panel consists of three figures known for their research on AI and its implications for society:

  • Raja Chatila, Professor Emeritus of Artificial Intelligence, Robotics and IT Ethics at Sorbonne University;
  • Caroline Lequesne Roth, Associate Professor in Public Law, Director of the Master II in Algorithmic Law and Data Governance at the Université Côte d’Azur;
  • Bertrand Braunschweig, Scientific Coordinator of the Confiance.ai program.

For Laetitia Orsini Sharps, CEO of the Positive AI Association, "Positive AI aims to create a space for working, sharing ideas, and practical and shared tools, for companies wishing to deploy Responsible AI. This practical approach is in line with the proposals made by the public authorities. Ultimately, our ambition is for this label to become a benchmark in France and in Europe, for all those who wish to develop ethical artificial intelligence that respects society. Positive AI is open to other companies of all sizes that would like to commit to this road towards Responsible AI and who share our conviction and vision for the positive use of this innovation in companies."

***

Press contact:

Mélanie Farge - +33 (0)763 134 210, [email protected]
