Responsible Artificial Intelligence: Principles and definition


AI Ethics: what does it mean?

The European Commission describes trusted AI in three points: 

  • It must be lawful and ensure compliance with applicable laws and regulations;
  • It must be ethical and embody respect for, and adherence to, ethical principles and values;
  • It must be technically and socially robust, to ensure that AI systems do not cause unintended harm despite good intentions. Trusted AI validates the reliability not only of the AI system itself, but also of all the processes and players involved in that system's life cycle.

What are the founding principles of Responsible Artificial Intelligence? 

Responsible Artificial Intelligence must also meet the 7 Ethics Guidelines defined by the European Commission:

  1. Human agency and oversight and fundamental rights;
  2. Technical robustness and safety;
  3. Privacy and data governance;
  4. Transparency of data, systems and models;
  5. Diversity, non-discrimination and fairness;
  6. Societal and environmental well-being;
  7. Accountability.

On the road to defining and regulating artificial intelligence

Faced with growing interest in artificial intelligence within society and organizations, public authorities have recognized the need to define and regulate artificial intelligence and its use.

In France:

  • 2018: The Villani report, "Making sense of AI", proposes a "body that issues opinions" on artificial intelligence, on what is acceptable and what is not.
  • 2020: The report by the Institut Montaigne, "Algorithms: Please mind the bias!", supports "the emergence of labels to strengthen the confidence of citizens in critical uses, and accelerate the dissemination of beneficial algorithms".
  • 2022: The study by the Conseil d'État, "Turning to artificial intelligence for better public service", recommends building "trustworthy public AI".

And elsewhere in Europe:

  • 2019: The European Commission publishes "7 Ethics Guidelines for Trustworthy AI".
  • 2021: The European Commission's proposal for the AI Act lays the foundations for an initial legal framework.
  • 2024: The new regulation on artificial intelligence systems is to be adopted.

These discussions will lead to a new regulation affecting all companies that manage or develop AI systems. Organizations therefore have an interest in committing now to making their AI responsible, especially since a company that uses AI irresponsibly risks losing its customers' trust and jeopardizing its reputation. 

Find out more about AI ethics

FAQs on Responsible AI

Why should we make AI more responsible? 

AI refers to a set of software systems designed to achieve a given objective. We speak of a lack of accountability or responsibility when algorithms are misleading, unfair or biased in their interactions with humans. To prevent any risk of misuse, or of use deemed irresponsible from an ethical and moral standpoint, control processes must be established throughout the value chain, from the design phase all the way to the deployment of the AI system. 

What are the key principles of Responsible Artificial Intelligence that Positive AI adheres to?

The Positive AI initiative has chosen to build an auditable reference framework around the key principles of Responsible AI defined by the European Commission. All facets of the guidelines issued by the EU's High-Level Expert Group (HLEG) are covered, and Positive AI has selected three priority areas with an overarching impact on individual freedoms: 

  • Justice and Fairness (HLEG principle 5): Prevent bias in the form of discrimination and unfair treatment that may exist in both data and algorithms;
  • Transparency and Explainability (HLEG principle 4): Ensure that the data and algorithms at the heart of AI systems are accessible, comprehensible, explainable or, at the least, interpretable and reproducible;
  • Human intervention (HLEG principle 1): Make sure that organizations can supervise and correct automated decisions, so that AI systems are designed for human-machine collaboration in the service of humans.

This reference framework is intended to evolve as the debate on AI ethics progresses, gradually integrating the other Responsible AI guidelines defined by the European Commission, such as the environmental question, which grows more important every day.

How can we ensure the responsible use of AI? 

Several actions must be taken to ensure that AI is used responsibly: 

  • Ensure that the objectives and results of AI systems are fair, unbiased and explainable;
  • Secure AI systems so that they are safe and robust;
  • Follow best practices in data governance to safeguard the privacy of users;
  • Minimize negative social and environmental impacts;
  • Make AI augment, rather than replace, human capabilities.

Find out more about the Positive AI initiative
