
Olivier Kahn, Data Expert at Positive AI

Olivier Kahn: "Today, we need to accelerate to ensure that all AI systems are developed responsibly."

"For me, Responsible Artificial Intelligence is AI whose stakeholders (developers, business teams, end consumers, employees, etc.) are aware of the impact of their AI system on society and the environment, and act to reduce this impact as much as possible through actions to control and measure risks."

Why should we put Responsible Artificial Intelligence into practice?

AI systems now occupy a prominent place in society. Historically, they were first developed by laboratories and research and development centers, and then, rapidly, by companies aiming to improve their products and services.

These technologies have evolved very quickly, and so have their uses, with little or no account taken of AI's impact on society. Today, therefore, we need to accelerate to ensure that all these AI systems are developed responsibly. If it isn't implemented ethically, AI carries high risks for people and the planet in several respects: justice, fairness, the environment, health, privacy, etc. It also affects companies: those that don't encourage ethical AI risk harming both their reputation and their performance.

What do companies and data experts need today?

The first step on the road towards Responsible Artificial Intelligence is establishing governance and tools that help organizations and their teams analyze and reduce risks. To support them, it's essential to have a framework that prescribes the actions to be taken and is also progressive, so that it brings along all players, including the less mature ones. Indeed, companies' maturity levels are very unequal: only 52% of businesses with revenues above 100 million US dollars have implemented a trusted AI program, and 80% of these programs have limited means to achieve their goals (BCG study). The road is still long…

As for data experts, they particularly need practical tools that make it easier to verify certain criteria: bias analysis, the greenhouse gas emissions linked to an artificial intelligence algorithm, etc. Such tools are under development, but so far there is no central space in which they are all referenced, documented and kept up to date.
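
To illustrate the kind of checks these tools enable, here is a minimal Python sketch using two openly available libraries, fairlearn for bias analysis and codecarbon for estimating training-related CO2 emissions. The toy dataset, model and variable names are illustrative assumptions, not tools or figures cited in the interview.

    # Minimal illustrative sketch: the data, model and protected attribute
    # are toy assumptions; fairlearn and codecarbon are open-source libraries.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from fairlearn.metrics import demographic_parity_difference
    from codecarbon import EmissionsTracker

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                          # toy features
    y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # toy labels
    group = rng.integers(0, 2, size=1000)                   # hypothetical protected attribute

    # Estimate the greenhouse gas emissions of the training step (kg CO2-eq).
    tracker = EmissionsTracker()
    tracker.start()
    model = LogisticRegression().fit(X, y)
    emissions_kg = tracker.stop()

    # One common bias criterion: demographic parity difference
    # (0.0 means predictions are independent of the protected attribute).
    dpd = demographic_parity_difference(y, model.predict(X), sensitive_features=group)

    print(f"Training emissions: {emissions_kg:.6f} kg CO2-eq")
    print(f"Demographic parity difference: {dpd:.3f}")

In practice, checks like these would be run systematically during model development and compared against agreed thresholds, which is precisely the kind of shared, documented tooling that is still missing.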

Finally, education on the resources and energy needed to move forward with an ethical AI approach would help bring the least advanced players on board.

Could a label help organizations regulate their AI?

A label provides a precise, transparent and auditable framework, which is invaluable for guiding companies of all kinds, whatever their level of maturity. This tool is also a lever for internal and external communication, helping to educate people about the actions carried out and the company's overall approach to trusted AI.