Generative AI exploded into the public consciousness in late 2022, and a year from now it could become one of the most heavily regulated technologies in the tech industry.
Around the world, from the United States and the European Union to Brazil, policymakers are urgently deliberating measures to control artificial intelligence and curtail some of its more concerning applications; China, for its part, is already implementing them.
The European Union is poised to take the lead in implementing oversight and regulations for generative AI. The European Commission, representing the bloc’s 27 member states, is in the final stages of negotiations over the AI Act, which is being hailed as “the world’s first set of regulations on AI.”
It is expected that a final version of the AI Act will be agreed upon this year, with the intention of it coming into effect by late 2025.
Some AI Tools Could Be Banned in Europe
The proposal for this legislation was initially introduced in 2021, preceding the release of OpenAI’s generative-AI tools ChatGPT and DALL-E, whose success subsequently led major companies like Meta, Google, and Microsoft to become prominent players in, and advocates for, generative AI.
The EU’s draft regulation has been revised this year. A primary objective of the act is to classify AI models and tools by risk, with categories including “high risk” and “unacceptable.”
AI falling into the high-risk category encompasses tools used for biometric identification, education, workforce management, the legal system, and law enforcement; any such tool would require assessment and approval before deployment.
AI tools and applications categorized as “unacceptable” would be prohibited within the European Union under this legislation.
This includes technologies like “remote biometric identification systems” (such as facial recognition), “social scoring” (which involves classifying individuals based on economic status and personal characteristics), and “cognitive behavioral manipulation,” such as voice-activated AI-driven toys.
Regarding generative AI, the European Union’s proposed regulations would make it obligatory to disclose when content has been artificially generated. Companies would also be required to disclose the data sources used in training any large language model.
Amid escalating scrutiny and legal challenges from content creators, particularly authors whose work has been scraped from the internet and used in massive training datasets, providers of AI tools and large language models (LLMs) have so far refrained from specifying the origins of their training data.
Under this legislation, however, companies would be compelled to demonstrate their efforts to mitigate legal risks prior to releasing their tools and models. Furthermore, they would be obliged to register all foundation models in a database maintained by the European Commission.
The US Approach
The United States is lagging behind the European Union on AI regulation. Last month, the White House announced its intention to create an executive order on the technology, highlighting the pursuit of “bipartisan regulation.” While the White House has actively sought counsel from industry experts, the Senate has held only a single hearing and a closed-door “AI forum” with leaders from prominent tech companies.
However, neither of these events resulted in significant action, even though Mark Zuckerberg was confronted during the forum regarding the fact that Meta’s Llama 2 model provided a detailed guide for creating anthrax. Nevertheless, American lawmakers maintain their commitment to implementing some form of AI regulation.
Senator Richard Blumenthal asserted during the hearing, “It’s important to be clear, there will be regulatory measures put in place.”
Changes to US copyright law may also be on the horizon. In August, the Copyright Office disclosed that it was considering action, or federal regulations, pertaining to generative AI, citing the “extensive public discourse concerning the potential impact of these systems on the creative industries.”
The Copyright Office initiated a public comment period that extended until early November, garnering over 7,500 submissions.
European Public Calls
A majority of Europeans want government restrictions on artificial intelligence to mitigate the impacts of the technology on job security, according to a major new study from Spain’s IE University.
The study shows that out of a sample of 3,000 Europeans, 68% want their governments to introduce rules to safeguard jobs from the rising level of automation being brought about by AI.
That figure is up 10 percentage points from a similar piece of research IE University published in 2022, when 58% of respondents said they thought AI should be regulated.
“The most common fear is the potential for job loss,” said Ikhlaq Sidhu, dean of the IE School of SciTech at IE University.
What’s Happening in the UK
The United Kingdom wants to achieve the status of an “AI superpower,” according to a March paper from the Department for Science, Innovation and Technology.
While the government department has established a “regulatory sandbox for AI,” the UK presently has no immediate plans to introduce any legislation for AI oversight. Instead, the intention is to evaluate and monitor the progress of AI.
Michelle Donelan, the Secretary of State for Science, Innovation, and Technology, said that “hastening the introduction of regulations too soon could impose excessive burdens on businesses.” She further noted, “As technology continues to evolve, our regulatory approach may also need to adapt accordingly.”
Brazil and China
In a recent revision of draft legislation earlier this year, Brazil appeared to adopt a strategy similar to the European Union by classifying AI tools and applications as either “high” or “excessive” risk and intending to prohibit those falling into the latter category.
The proposed law, as assessed by the tech advisory firm Access Partnership, places strong emphasis on human rights and establishes a rigorous liability framework. Under this legislation, Brazil would hold the creators of a large language model (LLM) responsible for any harm caused by AI systems categorized as high risk.
Meanwhile, China stands out as one of the few countries that has already implemented new regulations on AI. Despite the widespread use of technologies like facial recognition for government surveillance, China introduced regulations two years ago concerning recommendation algorithms, a fundamental AI application.
The following year brought additional regulations targeting “deep synthesis” technology, commonly referred to as “deepfakes.” Now, China is moving toward regulating generative AI. Notably, one of the proposed rules in draft legislation would require that any large language model (LLM) and its training data be “true and accurate.”
This single requirement could effectively limit the presence of consumer-level generative AI in China. A report from the Carnegie Endowment for International Peace, a nonpartisan think tank, on China’s rules said it was “a potentially insurmountable hurdle for AI chatbots to clear.”
The global push for AI regulation is rapidly gaining momentum. The European Union is at the forefront, with the AI Act set to introduce groundbreaking regulations. In the United States, the regulatory process is still in its early stages.
Meanwhile, Brazil and China are also enacting measures to regulate AI, with a focus on high-risk applications. In the UK, a cautious approach is being taken to assess AI’s progression. Across Europe, there is a growing demand for AI regulations to safeguard job security. The AI industry is witnessing significant changes, with potential implications for AI developers and users worldwide.