Europe's first AI rules: Could they set a global standard?
London
The breathtaking development of artificial intelligence has dazzled users by composing music, creating images, and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology were caught off guard by AI's rapid rise.
The 27-nation bloc proposed the Western world's first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them but weren't sure how, or even if it was necessary.
"Then ChatGPT kind of boom, exploded," said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. "If there was still some that doubted as to whether we need something at all, I think the doubt was quickly vanished."
The release of ChatGPT last year captured the world's attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials. With concerns emerging, European lawmakers moved swiftly in recent weeks to add language on general AI systems as they put the finishing touches on the legislation.
The EU's AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc's single market would make it easier to comply than to develop different products for different regions.
"Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term 'AI' can cover," said Sarah Chander, senior policy adviser at digital rights group EDRi.
Authorities worldwide are scrambling to figure out how to control the rapidly evolving technology to ensure that it improves people's lives without threatening their rights or safety. Regulators are concerned about new ethical and societal risks posed by ChatGPT and other general purpose AI systems, which could transform daily life, from jobs and education to copyright and privacy.
The White House recently brought in the heads of tech companies working on AI, including Microsoft, Google, and ChatGPT creator OpenAI, to discuss the risks, while the Federal Trade Commission has warned that it wouldn't hesitate to crack down.
China has issued draft regulations mandating security assessments for any products using generative AI systems like ChatGPT. Britain's competition watchdog has opened a review of the AI market, while Italy briefly banned ChatGPT over a privacy breach.
The EU's sweeping regulations, covering any provider of AI services or products, are expected to be approved by a European Parliament committee Thursday, then head into negotiations between the 27 member countries, Parliament, and the EU's executive Commission.
European rules influencing the rest of the world, the so-called Brussels effect, previously played out after the EU tightened data privacy and mandated common phone-charging cables, though such efforts have been criticized for stifling innovation.
Attitudes could be different this time. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause in the development of powerful AI systems to consider the risks.
Geoffrey Hinton, a computer scientist known as the "Godfather of AI," and fellow AI pioneer Yoshua Bengio voiced their concerns last week about unchecked AI development.
Mr. Tudorache said such warnings show the EU's move to start drawing up AI rules in 2021 was "the right call."
Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment. The company has told the EU that "AI is too important not to regulate."
Microsoft, a backer of OpenAI, did not respond to a request for comment. It has welcomed the EU effort as an important step "toward making trustworthy AI the norm in Europe and around the world."
Mira Murati, chief technology officer at OpenAI, said in an interview last month that she believed governments should be involved in regulating AI technology.
But asked whether some of OpenAI's tools should be classified as posing a higher risk under the proposed European rules, she said the question is "very nuanced."
"It kind of depends where you apply the technology," she said, citing as an example a "very high-risk medical use case or legal use case" versus an accounting or advertising application.
OpenAI CEO Sam Altman plans stops in Brussels and other European cities this month as part of a world tour to talk about the technology with users and developers.
Recently added provisions to the EU's AI Act would require "foundation" AI models to disclose copyrighted material used to train the systems, according to a partial draft of the legislation recently obtained by The Associated Press.
Foundation models, a category that includes the large language models behind systems like ChatGPT, are a subcategory of general purpose AI. Their algorithms are trained on vast pools of online information, like blog posts, digital books, scientific articles, and pop songs.
"You have to make a significant effort to document the copyrighted material that you use in the training of the algorithm," paving the way for artists, writers, and other content creators to seek redress, Mr. Tudorache said.
Officials drawing up AI regulations have to balance the risks that the technology poses with the transformative benefits that it promises.
Big tech companies developing AI systems and European national ministries looking to deploy them "are seeking to limit the reach of regulators," while civil society groups are pushing for more accountability, said EDRi's Ms. Chander.
"We want more information as to how these systems are developed (the levels of environmental and economic resources put into them) but also how and where these systems are used so we can effectively challenge them," she said.
Under the EU's risk-based approach, AI uses that threaten people's safety or rights face strict controls.
Remote facial recognition is expected to be banned. So are government "social scoring" systems that judge people based on their behavior, as well as indiscriminate "scraping" of photos from the internet for biometric matching and facial recognition.
Predictive policing and emotion recognition technology, aside from therapeutic or medical uses, would also be prohibited.
Violations could result in fines of up to 6% of a company's global annual revenue.
Even after getting final approval, expected by the end of the year or early 2024 at the latest, the AI Act won't take immediate effect. There will be a grace period for companies and organizations to figure out how to comply with the new rules.
It's possible that the industry will push for more time by arguing that the AI Act's final version goes further than the original proposal, said Frederico Oliveira Da Silva, senior legal officer at European consumer group BEUC.
They could argue that "instead of one and a half to two years, we need two to three," he said.
He noted that ChatGPT launched only six months ago and has already raised a host of problems and benefits in that time.
If the AI Act doesn't fully take effect for years, "what will happen in these four years?" Mr. Da Silva said. "That's really our concern, and that's why we're asking authorities to be on top of it, just to really focus on this technology."
This story was reported by The Associated Press. AP Technology Writer Matt O'Brien in Providence, Rhode Island, contributed.