Can AI outsmart Europe’s bid to regulate it?
London
“Landmark” was the headline of choice, and little wonder. After months of discussion and debate among politicians, pundits, and pressure groups worldwide, a group of legislators was finally taking regulatory steps to address the potential dangers of artificial intelligence.
And not just any legislators. Following a series of marathon meetings, the Parliament of the European Union (the world’s largest bloc of free-trading democracies) had reached agreement with representatives of its 27 member states on the draft text of the Artificial Intelligence Act.
Last Friday’s announcement, however, also drew attention for the twin wake-up calls it sounded.
Why We Wrote This
Artificial intelligence is changing people’s lives at a dizzying pace. Will new European Union regulations designed to make AI “trustworthy and human-centric” work?
First, it brought home how difficult it is proving for governments to place effective guardrails on the dizzyingly rapid expansion of AI. The EU began working on its AI strategy in 2018, and the new law won’t take full effect until sometime in 2026.
Yet it also homed in on the main reason that task is becoming more urgent: the impact already being felt on the everyday lives, rights, and political autonomy of individual citizens around the globe.
The EU’s purpose is explicit: ensuring “trustworthy, human-centric” use of AI as ever more powerful computer systems mine, and learn from, ever larger masses of digital data, spawning an ever wider array of applications.
The same technology that may now allow researchers to unlock the mystery of a virus could help create one. Large language models such as ChatGPT can produce fast, fluent prose drawn from billions of words on the internet. But they can, and indeed do, make mistakes, producing misinformation. And that same huge store of data can be abused in other ways.
One key individual-rights concern for the EU legislators was the prospect that AI could be employed, as is the case in China, to surveil and target citizens or particular groups in Europe.
The new law bans the untargeted scraping of images from the internet to create facial recognition databases, as well as the use of biometric profiling. The police would be exempt, but only under tightly defined circumstances.
More broadly, though the exact wording of the law has yet to be published, it will reportedly ensure that people are made aware whether the words and images they’re seeing on their screens have been generated not by humans, but by AI.
Among systems to be banned outright are any “manipulating human behavior to circumvent free will.”
The most powerful “foundation” AI systems (the general-purpose platforms on which developers are building a whole range of applications) will face testing, transparency, and reporting requirements, obliging them to share details of their internal workings with EU regulators.
All of this will be enforced by a new AI regulatory body, with fines for the most serious violations as high as 7% of a company’s global turnover.
Still, the laborious process of producing the AI Act is a reminder of the headwinds facing efforts to place internationally agreed-upon guardrails around a technological revolution whose reach transcends borders.
In the world’s leading AI power, the United States, President Joe Biden issued an executive order in October imposing safety tests on developers of the most powerful systems. He also mandated standards for federal agencies purchasing AI applications.
His aim, like the EU’s, was to ensure “safety, security, and trust.”
Yet officials acknowledged that more comprehensive regulation would need an act of Congress, which still seems far from agreeing on how, or even whether, to legislate limits.
One obstacle is the AI companies themselves. Though they acknowledge potential perils, they argue that overregulation risks limiting AI’s growth and reducing its benefits.
And would-be regulators also face geopolitical obstacles, especially the rivalry between the U.S. and China.
One sign has been Washington’s move to limit Chinese access to the advanced, specialized computer chips key to building the most powerful AI systems.
And that touches on a wider national security issue: the growing role of artificial intelligence in weapons systems. Drones have played a major role in Ukraine’s war against Russia’s invasion and in Israel’s attacks on Gaza. The next evolutionary step, military analysts suggest, could be AI-powered “drone swarms” on future battlefields.
The priority of the U.S. is clearly to seek an edge in AI weaponry, at least until there is a realistic hope of bringing China, Russia, and other high-tech military powers into the kind of agreements that, last century, helped limit nuclear weapons.
The EU鈥檚 new law does not even cover military applications of AI.
So for now, its main impact will be on the kind of “trust” and “human-centric” issues that European authorities and Mr. Biden both highlighted: letting people know when words or images have been created by AI, and, the lawmakers hope, blocking applications that seek deliberately to manipulate users’ behavior.
Still, that could prove important not just for individuals but also for the societies they live in: the beginning of a fight against the use of AI to “amplify polarization, bias, and misinformation” and thus undermine democracies, as one leading AI expert, Dr. De Kai, recently put it.
The historian Yuval Noah Harari has voiced particular alarm over AI’s increasingly powerful ability to “manipulate and generate language, whether with words, sounds, or images,” noting that language, after all, forms the bedrock of how we humans interact with one another.
“AI’s new mastery of language,” he says, “means it can now hack and manipulate the operating system of civilization.”