After intense discussions this week, politicians in Brussels have now reached a “provisional agreement” on the European Union’s proposed Artificial Intelligence Act (AI Act).
The act is expected to be the world's first comprehensive set of rules governing AI, and it could serve as a template for other jurisdictions drafting similar legislation.
Important Rules and Accountability Measures in the EU’s AI Act
According to the official statement, negotiators set out requirements for powerful, high-impact general-purpose AI systems, including risk assessments, adversarial testing, and incident reporting, among other obligations.
The agreement also requires transparency from these systems, including technical documentation and detailed summaries of the data used for training, something that companies such as OpenAI, the creator of ChatGPT, have so far declined to provide.
Another aspect is that people should be able to file complaints about AI systems and get explanations about decisions made by “high-risk” systems that affect their rights.
The press release didn’t provide specific information on how this process would function or the exact standards involved. However, it did mention a structure for imposing fines on companies that violate the regulations.
The fines scale with the nature of the violation and the size of the company, ranging from 35 million euros or 7 percent of global turnover for the most serious breaches down to 7.5 million euros or 1.5 percent of global turnover.
Overview of AI Regulations in the EU
Several applications of AI are banned outright, including scraping facial images from CCTV footage, categorizing people by "sensitive characteristics" such as race, sexual orientation, religion, or political beliefs, emotion recognition in workplaces and schools, and the development of "social scoring" systems.
Also prohibited are AI systems that "influence human behavior to override their free will" or "take advantage of people's vulnerabilities." The regulations additionally set out safeguards and exceptions for law enforcement's use of biometric systems, whether in real time or when searching recordings for evidence.
A final agreement is expected to be reached by the end of the year. However, the law is unlikely to take effect until at least 2025.
The initial version of the EU’s AI Act was revealed in 2021 with the aim of defining what qualifies as AI and harmonizing regulations for AI technology among EU member states. However, this initial draft did not account for rapidly evolving generative AI tools like ChatGPT and Stable Diffusion, leading to multiple revisions of the legislation.
Although a provisional agreement has been reached, further steps remain, including votes in Parliament's Internal Market and Civil Liberties committees.
Discussions about the rules governing real-time biometric monitoring (such as facial recognition) and "general-purpose" foundation models like the one underlying OpenAI's ChatGPT have been especially contentious. These talks were reportedly still ongoing this week ahead of Friday's announcement, delaying the press conference at which the agreement was revealed.
While EU lawmakers have advocated for a complete ban on AI in biometric surveillance, governments have sought exceptions for military, law enforcement, and national security purposes. Recent proposals from France, Germany, and Italy, suggesting that creators of generative AI models should self-regulate, are also thought to have contributed to the delays.
In short, after extensive negotiations, lawmakers in Brussels have tentatively agreed on the EU's groundbreaking Artificial Intelligence Act. As the first comprehensive set of rules for AI governance, it is poised to set a standard for similar legislation worldwide. Key provisions address transparency, accountability, and outright prohibitions, with a final agreement expected by year-end, though the law is unlikely to take effect before 2025. The remaining negotiations and committee votes underscore the complexity of regulating AI.