Positive AI Community #2: The impact of generative AI at work

17 October 2023

On September 21, the Positive AI Community held a webinar on the impact of generative AI, which brought together more than 230 employees from the association's member companies. On this occasion, François Candelon, Managing Director and Senior Partner at BCG and Global Director of the BCG Henderson Institute (BCG's think tank), presented the results of a recent BCG study on the benefits and limits of using generative AI at work.

Almost a year after the launch of ChatGPT, the adoption of this new technology in the workplace calls for in-depth reflection from business leaders.

Presentation of the webinar

GPT-4: a fine line between creativity gains and value destruction

In a scientific experiment unique of its kind, the BCG study entitled “How to create and destroy value with generative AI” relied on more than 750 consultants to assess the value of generative artificial intelligence in a professional services environment, based on employees' day-to-day tasks.

“We see that AI combined with humans is extremely efficient on the creative side”, François Candelon, BCG

First lesson: when the tool was used wisely, more than 90% of participants improved their performance with GPT-4 on creative ideation tasks (for example, improving a product's design). Their performance also converged at a level roughly 40% higher than that of participants working on the same task without the tool.

The results nevertheless reveal a paradox: although GPT-4 produces ideas that are more developed, and therefore judged to be better, the diversity of those ideas is lower than that of ideas imagined by humans alone. Around 70% of participants believe that heavy use of GPT-4 could stifle their creative abilities over time.

Finally, while consultants tend to be wary of AI in areas where it can bring real added value, they have conversely tended to trust it too much on more complex tasks (for example, defining a business strategy) that fall outside the tool's current capabilities. Those who used GPT-4 for a task beyond its scope performed worse (-23%) than those who did not use the tool at all.

 

The need for change management within companies

According to the study's conclusions, these initial results indicate that it is more necessary than ever for companies to implement an internal change-management strategy. Business leaders have a key role to play in these decisions: they must think carefully about which tasks within their organization stand to benefit from generative AI and which could be harmed by it.

Several action levers were identified by the authors of the study:

Implementing a data strategy to remain competitive;
Continuing to test and experiment with AI tools as the technology evolves;
Implementing an HR and management strategy to identify new AI-related roles (such as Prompt Engineers), cultivate “human-AI” complementarity and promote internal learning of these tools;
And, perhaps most importantly, continually reviewing decisions as the frontier of AI capabilities advances.

Read the BCG study in full

 

The Positive AI Community, a forum for exchange among the association's members

This event, like those to be organized soon, is in line with the association's commitments to create a space for exchanging and sharing good practices. The Positive AI Community aims to become the reference community on all ethical AI topics, so that members can make collective progress on applying responsible artificial intelligence within their organizations.

Would you like to join the Positive AI initiative and benefit from upcoming Positive AI Community events? Join us!
