
To assess an organization's maturity in its use of Responsible Artificial Intelligence, the Positive AI label's technical reference framework analyzes:
It was developed on the basis of the ethical principles of Responsible AI defined by the European Commission. Below are all the aspects covered, including three priority areas, which constitute a coherent whole around individual freedoms:
The Positive AI label consists of a technical reference framework and operational tools that give organizations practical means both to exploit artificial intelligence as a lever for innovation and to control the associated risks.
Depending on your organization's level of maturity, you will be awarded one of the label's three levels. Through discussions with the Positive AI community and the tools made available to you, you can make progress on Responsible AI and then qualify for a higher level of AI labeling.
The label was created by data scientists from the founding companies of the Positive AI initiative – BCG GAMMA, L'Oréal, Malakoff Humanis, and Orange France – to be as pragmatic and practical as possible, so that any company can improve the level of trust in its artificial intelligence systems. The initiative was then assisted by the consulting firm EY, an audit specialist, in constructing an auditable reference framework. Finally, this document was submitted to Positive AI's independent panel of experts in artificial intelligence and ethics for its opinions and observations.
Do you want to benefit from the AI labeling service? Contact us to find out when applications open (in 2023 – date to be announced).
Artificial intelligence is now present in every profession and industry; it is no longer the preserve of disruptive innovation in cutting-edge sectors. Quite the opposite: current events show its growing democratization. Its omnipresence raises questions across all of organizations' operational and economic processes, as well as for the general public, consumers, and employees, who see in it an opportunity to improve their daily lives but also a risk of losing their jobs or control over some of their freedoms.
Faced with the scale of what is at stake, public authorities and French and European experts have opened a debate on digital ethics. Companies, which are on the front line of AI development and deployment, have a major role to play in ensuring that AI combines its potential for innovation with respect for human rights. However, according to a recent BCG study, although 84% of companies say that Responsible Artificial Intelligence should be a priority, only 16% of them have developed a mature program, due in particular to a lack of operational solutions and tools with which to put it into practice.
With this in mind, BCG GAMMA, L'Oréal, Malakoff Humanis, and Orange France launched Positive AI in early 2022. Part of the initiative's mission is to supply organizations with the tools to establish lasting Responsible Artificial Intelligence.
The label:
To encourage all players to work towards the implementation of Responsible AI, including those with less maturity in this area, the label has been designed with three successive levels.