On June 6, Positive AI and DataScientest, a data science training organization, jointly organized a breakfast to discuss the hot topic of Responsible AI and corporate training.
Positive AI and DataScientest: how to develop ethical AI?
This event, which brought together data professionals and managers, was an opportunity to respond to participants’ issues and questions about implementing ethical AI in their company.
Training and skills development, an issue at the heart of companies’ concerns
The rapid development of new technologies such as generative AI is now impacting every aspect of the economy, and many members of the public, employees and, in particular, data scientists are demanding more ethics and transparency in everyday practice. In response, Positive AI and DataScientest have pooled their expertise to offer training to companies wishing to make progress, acquire the skills needed for the responsible use of AI, and anticipate future regulations. With the European AI Act expected to take effect in 2024, it is essential that organizations ask themselves the right questions now and ensure that their artificial intelligence, and the uses they make of it, are responsible.
Three levels of ethical AI training offered by Positive AI and DataScientest
To raise awareness among the various stakeholders and meet their specific needs, three types of training will be offered to companies:
- “Ethical leader”, a first course for senior executives (1 to 2 hours, intra-company and face-to-face) who wish to acquire a strategic vision of Responsible AI and deepen their knowledge of key AI models and of current and future regulations (AI Act). The goal is to give them everything they need to raise employees’ awareness of AI ethics and to build a shared company-wide vision on these issues.
- “Fundamentals”, a second course for employees who work with data on a daily basis (a half-day of intra- or inter-company training, with face-to-face and remote sessions), to help them better understand the concepts of ethics and responsibility and to support them, through case studies, in using and implementing Responsible AI.
- “Responsible coder”, a third course for data scientists (a 5-day program combining inter-company, face-to-face and remote sessions), which aims to develop in depth the technical skills required to put Responsible AI into practice across its key principles: privacy protection, bias, ethics, fairness, transparency, explainability, governance, etc.
Each module will be illustrated with use cases and practical company examples to promote understanding of the risks that uncontrolled AI can generate. A common-core syllabus will also run through all three courses, enabling the different groups of participants to master the basics of AI as well as its ethical and legal issues.
Are you interested in one or more of these courses? Contact us to arrange a chat with the Positive AI teams.
Positive AI breakfasts: a collaborative format for sharing information on trusted AI
This event was also an opportunity to reiterate the commitments of the Positive AI initiative, which aims to open up the association and create a space for information sharing and mutual assistance on the question of trusted AI. These breakfasts, which will be held regularly, are aimed at all companies wishing to get involved in shaping Responsible AI, as well as professionals linked directly or indirectly to AI and data projects within their organization.
Through these events, the aim is to reach out to managers, employees and data scientists and give them initial answers to the specific needs of their organization and/or business (CSR, security and compliance, data, etc.) around ethical AI.
These breakfasts are also an opportunity to share informative content on the Positive AI initiative, its role, its mission, and the services offered (reference framework, label, etc.) for putting trusted AI into practice in companies.