
Isaac Look, Head of Data Governance and Quality at Malakoff Humanis

Isaac Look: "It must be possible to tell and share the story of every AI system clearly and accurately"

"In my opinion, Responsible Artificial Intelligence must include safeguards from the outset that enable us to question the precise objectives of AI and its impact."

Why should ethical AI be implemented?

Malakoff Humanis is a provider of health insurance, life insurance, and retirement and savings plans. Our core business is therefore taking care of our customers, our employees, and our partners. For us, AI is a real lever for innovation that can support this mission. In practical terms, it can help us quickly identify customers waiting for an urgent response, or spot a potentially fraudulent claim.

However, AI frightens the general public. People don't know how it works, when they are confronted with it, or how it can influence their choices and decisions. Controversies such as the hiring discrimination caused by an AI recruiting system at Amazon, or the Facebook chatbots that invented their own language, have fueled these fears, and rightly so.

Internally, too, AI raises questions, despite a strong desire to take advantage of it. The people who use it in their work want to understand our AI systems better so that they can trust them, make informed decisions, and improve their performance.

That is why, from the moment we set up our first AI systems at Malakoff Humanis, we understood the need to implement Responsible Artificial Intelligence. We have established processes to assess our level of trust in our systems and to raise awareness within the company of their ethical implications. We are committed to education and transparency, and we give people a significant role in the design, construction, and improvement phases. It must be possible to tell and share the story of every AI system clearly and accurately.

What do organizations need to implement Responsible AI?

Implementing ethical AI in our organization has enabled us to question a number of risks, to understand them, and to control them better from the design phase onwards. These risks may be regulatory (non-compliance in the use of personal data, or prohibited purposes), security-related (cybersecurity), but also ethical and human (amplification of existing biases, or a lack of control over data, algorithms, and their results). For example, at Malakoff Humanis we have decided that our algorithms never single-handedly identify suspect cases: any case they flag triggers a second opinion from a doctor.

We also recommend setting up multidisciplinary teams specializing in data science, law, compliance, risk management, information systems, etc. Ethical AI involves many professions, and each must contribute its own vision, sensitivity, questions, doubts, and answers at every stage of an AI project.
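To make the second-opinion safeguard described above concrete, here is a minimal sketch in Python, using purely hypothetical names and thresholds rather than Malakoff Humanis's actual system: the algorithm may flag a case, but only a doctor's review can confirm it.

```python
from dataclasses import dataclass
from enum import Enum


class CaseStatus(Enum):
    CLEARED = "cleared"
    PENDING_DOCTOR_REVIEW = "pending_doctor_review"
    CONFIRMED_SUSPECT = "confirmed_suspect"


@dataclass
class Case:
    case_id: str
    fraud_score: float  # model output in [0, 1]; hypothetical
    status: CaseStatus = CaseStatus.CLEARED


def route_case(case: Case, review_threshold: float = 0.8) -> Case:
    """The algorithm may flag a case, but it never confirms one alone:
    any score above the threshold is queued for a doctor's review."""
    if case.fraud_score >= review_threshold:
        case.status = CaseStatus.PENDING_DOCTOR_REVIEW
    return case


def record_doctor_opinion(case: Case, doctor_confirms: bool) -> Case:
    """Only a human reviewer can move a case to 'confirmed suspect'."""
    if case.status is CaseStatus.PENDING_DOCTOR_REVIEW:
        case.status = (CaseStatus.CONFIRMED_SUSPECT if doctor_confirms
                       else CaseStatus.CLEARED)
    return case


# Example: a high-scoring case is routed to a doctor, never auto-confirmed.
case = route_case(Case(case_id="C-001", fraud_score=0.93))
assert case.status is CaseStatus.PENDING_DOCTOR_REVIEW
case = record_doctor_opinion(case, doctor_confirms=False)
assert case.status is CaseStatus.CLEARED
```

The point of such a design is that no code path can set a case to "confirmed suspect" without a recorded human decision.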

To set up a trusted AI system, it is essential to track the right information and indicators precisely: the quality and profile of the data used, the algorithm's performance indicators, the explanation or interpretation of its results, and so on. Once an AI application has been launched, these indicators must also be supervised to ensure their stability, and feedback must be collected from internal and external users.
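As one illustration of what supervising stability can mean in practice, the sketch below computes a Population Stability Index (PSI), a drift indicator widely used in insurance and credit scoring, comparing the score distribution seen at validation time with the one seen in production. The data, the 0.2 threshold, and the function names are hypothetical, not a description of Malakoff Humanis's tooling.

```python
import numpy as np


def population_stability_index(reference: np.ndarray,
                               production: np.ndarray,
                               n_bins: int = 10) -> float:
    """Population Stability Index (PSI): a common indicator of drift
    between scores seen at design time and scores seen in production.
    Assumes scores lie in [0, 1], e.g. model-output probabilities."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Clip to avoid log(0) in empty bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))


# Hypothetical monitoring step: compare this week's production scores
# with the scores recorded during validation.
rng = np.random.default_rng(seed=42)
validation_scores = rng.beta(2, 5, size=10_000)   # design-time reference
production_scores = rng.beta(2, 4, size=10_000)   # slightly shifted

psi = population_stability_index(validation_scores, production_scores)
if psi > 0.2:  # a widely used rule of thumb for "significant shift"
    print(f"PSI = {psi:.3f}: score distribution has shifted, escalate for review")
else:
    print(f"PSI = {psi:.3f}: score distribution is stable")
```

The same pattern extends to the other indicators mentioned above: data-quality checks and performance metrics can be recomputed on each new batch and compared against their design-time baselines.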

Finally, data and AI governance is essential in order to organize processes and a committee structure (a formal decision-making procedure) to support and challenge AI systems. Throughout their life cycles, it will enable us to do the following:

  • Bring together the right expertise,
  • Share a culture of responsible data and Responsible AI,
  • Continue to raise questions concerning the ethical risks,
  • Challenge and understand the developments in AI,
  • Anticipate strategic risks for companies.

Could a label help organizations with the regulation of their AI?

A label offers organizations a structured, simple framework to roll out. It could help companies identify and evaluate the level of ethical risk posed by their algorithms, and properly understand and apply operational requirements at the level of both governance and individual AI systems. What's more, thanks to the review of an independent auditor, companies will be able to identify areas for continuous improvement.

Finally, such a label also brings together companies that are committed to Responsible Artificial Intelligence and encourages a healthy, positive form of competition among them. Together, they will be able to share new levers for better operational management of the ethical risks of the AI systems they put in place.