The kick-off Plenary for the General-Purpose AI Code of Practice took place online


https://digital-strategy.ec.europa.eu/en/news/kick-plenary-general-purpo...

The European Artificial Intelligence (AI) Office kicked off the process of drawing up the first Code of Practice for general-purpose AI models under the AI Act. Nearly 1,000 attendees, including general-purpose AI model providers, downstream providers, industry, civil society, academia, and independent experts, took part in the online Plenary to help develop the Code of Practice. The meeting had a working character and was therefore open only to eligible stakeholders who signed up via the EU Survey by 25 August 2024.

Ahead of their publication in autumn, the AI Office also presented initial results from the multi-stakeholder consultation on the Code of Practice, which received almost 430 submissions.

The Code of Practice aims to facilitate the proper application of the AI Act's rules for general-purpose AI models, including transparency and copyright-related rules, the systemic risk taxonomy, risk assessment, and mitigation measures. The drafting process will involve four working groups, each meeting three times to discuss the drafts, led by chairs and vice-chairs: independent experts selected following a call for expression of interest. The list of the chairs and vice-chairs of the four working groups is available.

The final version of the Code of Practice will be published and presented in a closing plenary, expected in April 2025. More information about the development of the first General-Purpose AI Code of Practice is available online.

 

Meet the Chairs leading the development of the first General-Purpose AI Code of Practice


 

Chairs announced for the four Working Groups of the first General-Purpose AI Code of Practice.



Hundreds of participants are set to attend the kick-off Plenary for the development of the first Code of Practice for general-purpose AI models under the AI Act.

Organised by the EU AI Office, this event marks the beginning of a collaborative effort involving general-purpose AI model providers, industry organisations, academia, and civil society, aiming to craft a robust framework. The Plenary will focus on outlining the working groups, timelines, and the expected outcomes, including the initial insights of a multi-stakeholder consultation with almost 430 submissions.

Chairs and Vice-Chairs play pivotal roles in shaping the first General-Purpose AI Code of Practice. These experts, drawn from diverse backgrounds in computer science, AI governance, and law, will guide the process across four working groups. Their leadership is central to developing and refining drafts that address transparency, copyright, risk assessment, mitigation measures, and the internal risk management and governance of general-purpose AI providers. The selection criteria prioritised expertise and independence while ensuring geographical diversity and gender balance.

For example, the co-chairs of the working group on transparency and copyright bring a unique combination of expertise. One has a deep background in European copyright law, with over 25 years of experience, while the other offers extensive knowledge in AI transparency, backed by a PhD from MIT and leadership in human-centric AI research.

The diversity of the Chairs' specialisations ensures comprehensive, state-of-the-art attention to technical, legal, and governance considerations.

The Chairs and Vice-Chairs will synthesise input from participants and lead iterative discussions between October 2024 and April 2025, ensuring a comprehensive and effective Code of Practice. The final draft is expected to be presented in a closing plenary by April 2025.

List of Chairs & Vice-Chairs 

Working Group 1: Transparency and copyright-related rules

 

Co-Chair (Transparency): Nuria Oliver (Spain)

Nuria Oliver is the Director of the ELLIS Alicante Foundation and holds a PhD in AI from MIT. She has 25 years of research experience in human-centric AI, spanning academia, industry, and NGOs. Nuria is an independent board member of the Spanish Supervisory Agency of AI, a member of the International Expert Advisory Panel to the Scientific Report on the Safety of Advanced AI, and a Fellow of IEEE, ACM, EurAI, and ELLIS. She is also the co-founder and vice-president of ELLIS.

Co-Chair (Copyright): Alexander Peukert (Germany)

Alexander Peukert is a Professor of Civil, Commercial, and Information Law at Goethe University Frankfurt am Main. With over 25 years of experience, he is a leading expert on European and international copyright law, focusing recently on the intersection of copyright and artificial intelligence. He has been a member of the Expert Committee on Copyright of the German Association for the Protection of Intellectual Property (GRUR) since 2004 and is a founding member of the European Copyright Society, which he chaired in 2023/2024.

Vice-Chair (Transparency): Rishi Bommasani (US)

Rishi Bommasani is the Society Lead at the Stanford Center for Research on Foundation Models, part of the Stanford Institute for Human-Centered AI. He researches the societal impact of general-purpose AI models, advancing the role of academia in evidence-driven policy. His work has won several scientific recognitions and has been featured in The Atlantic, Euractiv, Nature, the New York Times, Reuters, Science, the Wall Street Journal, and the Washington Post.

Vice-Chair (Copyright): Céline Castets-Renard (France)

Céline Castets-Renard is a Full Professor at the Civil Law Faculty of the University of Ottawa, where she holds the Research Chair on Accountable AI in a Global Context. Her research focuses on the regulation and governance of digital technologies and AI from an international and comparative law perspective. She is an expert in AI law, personal data and privacy law, digital copyright law, and platform regulation. She also studies the impact of technologies on human rights, equity, and social justice.

Working Group 2: Risk identification and assessment, including evaluations

Chair: Matthias Samwald (Austria)

Matthias Samwald is an Associate Professor at the Institute of Artificial Intelligence at the Medical University of Vienna. His research focuses on harnessing AI to accelerate scientific research, transform medicine, and contribute to human well-being, while ensuring that these AI systems are safe and reliable in their operation.

Vice-Chair: Marta Ziosi (Italy)

Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. During her PhD at the Oxford Internet Institute, she worked on algorithmic bias and collaborated on projects at the intersection of AI policy, fairness, and standards for large language models. She is also the founder of AI for People, a non-profit organization dedicated to ensuring that technology serves the public good.

Vice-Chair: Alexander Zacherl (Germany)

Alexander Zacherl is an independent Systems Designer. At the inception of the UK AI Safety Institute, he helped build the technical research team and the autonomous systems evaluations team. Previously, he worked at DeepMind on simulations and human interaction environments for multi-agent reinforcement learning research.

Working Group 3: Technical risk mitigation

Chair: Yoshua Bengio (Canada)

Recognised worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is most known for his pioneering work in deep learning, earning him the 2018 A.M. Turing Award, “the Nobel Prize of Computing,” with Geoffrey Hinton and Yann LeCun. He is Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO.

Vice-Chair: Daniel Privitera (Italy and Germany)

Daniel Privitera is the founder and Executive Director of the KIRA Center, an independent AI policy non-profit based in Berlin. He is the Lead Writer of the International Scientific Report on the Safety of Advanced AI, which is co-written by 75 international AI experts and supported by 30 leading AI countries, the UN, and the EU.

Vice-Chair: Nitarshan Rajkumar (Canada)

Nitarshan Rajkumar is a PhD candidate researching AI at the University of Cambridge. He was previously Senior Policy Adviser to the UK Secretary of State for Science, Innovation and Technology, a role in which he co-founded the AI Safety Institute. Prior to that, he was a researcher at Mila in Montréal and a software engineer at startups in San Francisco.

Working Group 4: Internal risk management and governance of General-purpose AI providers

Chair: Marietje Schaake (Netherlands)

Marietje Schaake is a Fellow at Stanford's Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. From 2009 to 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of The Tech Coup.

Vice-Chair: Markus Anderljung (Sweden)

Markus Anderljung's research focuses on AI regulation, the responsible development of cutting-edge AI, and compute governance, among other topics. He is an Adjunct Fellow at the Center for a New American Security and a member of the OECD AI Policy Observatory's Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist, and served as GovAI's Deputy Director and as a Senior Consultant at EY Sweden.

Vice-Chair: Anka Reuel (Germany)

Anka Reuel is a Computer Science PhD candidate at Stanford University. Her research focuses on technical AI governance. She conducts research at the Stanford Trustworthy AI Research Lab and the Stanford Intelligent Systems Laboratory. She is also a Geopolitics and Technology Fellow at the Belfer Center at Harvard Kennedy School.
