Find out more about AI labeling

The Positive AI label aims to support all organizations that manage and/or develop artificial intelligence systems and that wish to voluntarily commit to an ethical AI approach. It is aimed at the creators of AI systems as well as the companies that use them, regardless of their size, level of maturity, and area of activity.

The aspects assessed

In order to assess an organization's maturity as regards its use of Responsible Artificial Intelligence, the technical reference framework of the Positive AI label analyzes:

  • its governance, 
  • the management of its AI systems,
  • the algorithms that pose the greatest ethical risks and the methods used to design them.

It was developed on the basis of the ethical principles of Responsible AI defined by the European Commission. Below are all the aspects covered, including three priority areas that together form a coherent whole centered on individual freedoms:

    Justice and Fairness

    Prevent bias in the form of discrimination and unfair treatment that may exist in both data and algorithms.

    Transparency and Explainability

    Ensure that the data and algorithms at the heart of AI systems are accessible, comprehensible, explainable or, at the least, interpretable and reproducible.

    Human intervention

    Make sure that organizations can supervise and correct automated decisions so that AI systems are designed for human-machine collaboration in the service of humans.

    Tools at your disposal

    The Positive AI label consists of a technical reference framework and operational tools that give organizations practical means both to exploit artificial intelligence as a lever for innovation and to control the associated risks.

    Depending on your organization's level of maturity, you will have access to one of the label's three levels. Through discussions with the Positive AI community and the tools made available to you, you will be able to make progress on Responsible AI and then qualify for higher-level AI labeling.


    Who created the label?

    The label was created by data scientists from the founding companies of the Positive AI initiative – BCG GAMMA, L'Oréal, Malakoff Humanis, and Orange France – to be as pragmatic and practical as possible, and therefore to enable any company to improve the level of trust in its artificial intelligence systems. The AI labeling initiative was then assisted by consulting firm EY, an audit specialist, in the construction of an auditable reference framework. Finally, this document was submitted to Positive AI's independent panel of experts on artificial intelligence and ethics for its opinions and observations.


    Apply for the Positive AI label

    Do you want to benefit from the AI labeling service? Contact us to find out the opening date for applications (during 2023 – date to come).


    The label's key principles

    • A technical reference framework designed by the data scientists of the founding companies, supported by consulting firm EY, an audit specialist
    • This document was submitted to Positive AI's independent panel of experts on artificial intelligence and ethics
    • Operational tools made available to all
    • Three successive levels of certification



    1) Why has a label been created?

    Artificial intelligence is now present in all professions and industries; it is no longer the preserve of disruptive innovations in cutting-edge sectors. On the contrary, current events show its growing democratization. Its omnipresence raises questions about every organizational and economic process within companies, and also for the general public, consumers, and employees, who see in it an opportunity to improve their daily lives, but also the risk of losing their jobs or control over some of their freedoms.

    Faced with what is at stake, public authorities and French and European experts have opened a debate on digital ethics. Companies, which are on the front line of AI development and deployment, have a major role to play in ensuring that AI combines its potential for innovation with respect for human rights. However, according to a recent BCG study, although 84% of companies say that Responsible Artificial Intelligence ought to be a priority, only 16% have developed a mature program, due in particular to a lack of operational solutions and tools with which to put it into practice.

    With this in mind, BCG GAMMA, L'Oréal, Malakoff Humanis, and Orange France launched Positive AI in early 2022. Part of the initiative's mission is to supply organizations with the tools to establish lasting Responsible Artificial Intelligence.

    2) What are the label's three key points?

    The label:

    • is open to all organizations that manage and/or develop artificial intelligence systems, regardless of their size, level of maturity, and area of activity, and that wish to voluntarily commit to the implementation of trusted artificial intelligence
    • consists of a technical framework and practical tools designed by data scientists from the founding companies, with the support of consulting firm EY, an audit specialist
    • has three successive levels of certification

    3) Is there a minimum level of maturity in ethical AI that my company must have in order to obtain the label?

    To encourage all players to work towards the implementation of Responsible AI, including those with less maturity in this area, the label has been designed with three successive levels.

Find out more about the Positive AI initiative
