The European Union is moving to shape the direction of artificial intelligence with the first general-purpose AI Code of Practice, developed under the AI Act.
According to a Sept. 30 statement, the European AI Office is leading the effort, which brings together hundreds of experts from academia, industry, and civil society worldwide to collaboratively create a framework addressing key concerns including transparency, copyright, risk assessment, and internal governance.
The kick-off plenary, held virtually with about 1,000 attendees, marked the start of a months-long process that will conclude with the final draft in April 2025.
The Code of Practice is expected to be central to applying the AI Act to general-purpose AI models such as large language models (LLMs) and to AI systems integrated across multiple sectors. The session also unveiled four working groups, led by chairs and vice chairs drawn from industry and research, that will help develop the Code of Practice.
Their leadership includes prominent authorities such as German copyright law specialist Alexander Peukert and artificial intelligence researcher Nuria Oliver. The groups will focus on transparency and copyright, risk identification, technical risk mitigation, and internal risk management.
According to the European AI Office, the working groups will convene between October 2024 and April 2025 to draft provisions, gather stakeholder input, and refine the Code of Practice through ongoing consultation. The landmark EU AI Act, passed by the European Parliament in March 2024, aims to regulate the technology across the bloc.
The act was designed to provide a risk-based approach to AI governance, classifying systems into risk tiers ranging from minimal to unacceptable and imposing corresponding compliance obligations. It is particularly relevant to general-purpose AI models, which often fall into the higher-risk categories described by the legislation because of their broad applications and potential for significant societal impact.
Some major AI firms, including Meta, have criticized the rules as overly restrictive, arguing they would stifle innovation. The EU's collaborative approach to developing the Code of Practice is a response to that criticism, seeking to balance ethics and safety with the encouragement of innovation. The multi-stakeholder consultation has already yielded over 430 submissions, which will inform the drafting of the code.
With a strong focus on reducing risks and increasing societal benefits, the EU hopes that by next April the culmination of these efforts will set a precedent for how general-purpose AI models can be responsibly developed, deployed, and governed. As the global AI landscape evolves rapidly, the initiative is likely to influence AI regulation worldwide, particularly as more countries look to the EU for guidance on governing emerging technologies.