Positive AI Community #3: advances, limits and perspectives of AI

19 December 2023
On 22 November 2023, Positive AI organised its 3rd webinar, hosted by Bertrand Braunschweig, member of Positive AI’s independent committee of AI and ethics experts and scientific coordinator of the Confiance.ai programme. On this occasion, the expert reviewed the latest advances, limits and prospects for artificial intelligence.

AI is making remarkable progress

Over the last few decades, artificial intelligence has undergone a remarkable expansion in many fields of application: 

  • Image recognition 
  • Speech processing (Siri, Alexa, SoundHound) 
  • Natural language (translation, synthesis, summarisation, Q&A, etc.)
  • Games (draughts, chess, Breakout, Go, poker, StarCraft, etc.) 
  • Decision support (banking, finance, health, etc.) 
  • Recommendations, personalised advertising, etc.
  • Science (AlphaFold, astrophysics, etc.) 

However, with the advent of generative AI, flaws are appearing even as progress accelerates. Bertrand Braunschweig summed up the “five walls of AI” – the obstacles that could halt AI’s progress – namely trust, energy, safety and security, human-machine interaction and inhumanity. In addition, large language models (LLMs) that rely heavily on deep learning, such as ChatGPT, can be a source of errors and generate:

  • Toxicity 
  • Stereotyping and bias, undermining fairness 
  • Lack of robustness against adversarial attacks or out-of-distribution (OOD) inputs 
  • Failure to respect privacy 
  • Ethical problems 
  • Hallucinations (fabricated content) 
  • Misalignment with user decisions

Towards trusted AI

The draft European regulation on artificial intelligence (the AI Act), currently under negotiation, proposes that AI systems be analysed and classified according to the risk they pose to users.

“Trust is the main issue in European regulation”, Bertrand Braunschweig points out.
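The Commission’s published summaries of the draft regulation describe four risk tiers, each carrying different obligations. The sketch below is purely illustrative (the tier names follow those summaries; the example systems and the `obligations_for` helper are assumptions for illustration, not text from the regulation):

```python
# Illustrative sketch of the AI Act's risk-based classification.
# Tier names follow the European Commission's published summaries;
# the example systems in parentheses are illustrative only.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring by public authorities)",
    "high": "strict requirements: risk management, documentation, human oversight",
    "limited": "transparency obligations (e.g. disclosing that a chatbot is an AI)",
    "minimal": "no specific obligations (e.g. spam filters, video games)",
}

def obligations_for(tier: str) -> str:
    """Return the obligations attached to a given risk tier."""
    return RISK_TIERS[tier]

print(obligations_for("high"))
```

The key design idea of the regulation is exactly this mapping: obligations scale with risk, so most everyday systems face no new requirements while high-risk uses face strict ones.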

The principles laid down by the European Union aim to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. In addition, for an AI system to be considered responsible and trusted, these three dimensions need to be taken into account, the expert points out:

  • The technological dimension, encompassing the robustness and reliability of the systems. 
  • Interactions with individuals, covering aspects such as explainability, monitoring, accountability and transparency. 
  • The social dimension, addressing issues of privacy and fairness.

The European Parliament defines artificial intelligence systems as “computer systems that are designed to operate with different levels of autonomy and that can, for explicit or implicit purposes, generate results such as predictions, recommendations or decisions that influence physical or virtual environments”.

Carbon emissions generated by GPT-3

After outlining the technological advances and liability issues surrounding AI, Bertrand Braunschweig examined the current and future challenges facing artificial intelligence. These challenges include respect for personal data, the responsibility of AI, and the environmental and energy impact associated with the development of these tools. On the last point, it is estimated that artificial intelligence systems are significant emitters of carbon dioxide. By way of example, GPT-3 generated around 500 tonnes of CO2 emissions in 2022. These issues underline the crucial importance of providing appropriate solutions.
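The ~500-tonne figure quoted above is consistent with the published estimate for training GPT-3: Patterson et al. (2021) report roughly 1,287 MWh of electricity consumed at an estimated grid intensity of about 0.429 kgCO2e/kWh. A minimal back-of-envelope check using those published figures (estimates, not measurements):

```python
# Back-of-envelope check of the GPT-3 training-emissions estimate,
# using figures reported by Patterson et al. (2021):
#   energy: ~1,287 MWh; grid carbon intensity: ~0.429 kgCO2e/kWh.
energy_kwh = 1_287_000          # 1,287 MWh expressed in kWh
grid_intensity = 0.429          # kgCO2e per kWh (estimated datacentre mix)

emissions_tonnes = energy_kwh * grid_intensity / 1000  # kg -> tonnes
print(f"~{emissions_tonnes:.0f} tonnes CO2e")          # prints ~552 tonnes CO2e
```

The result, on the order of 550 tonnes, matches the “around 500 tonnes” cited in the webinar, and shows how sensitive such estimates are to the assumed grid intensity.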

The Positive AI Community, a forum for exchange between members of the association

This event – like those to be held in the near future – is in line with the association’s commitment to creating a forum for discussion and the sharing of best practice. The aim of the Positive AI Community is to become the community for all ethical AI issues, so that we can make collective progress on the application of responsible artificial intelligence within organisations. 

Would you like to join the Positive AI initiative and benefit from upcoming Positive AI Community events? Join us!
