The European Union is planning to regulate the use of artificial intelligence. On June 14, the parliament will discuss a draft law that calls for all AI-generated content to be marked as such and suggests dividing AI applications into different risk categories.
Should it pass, the law would ban high-risk systems that analyze and predict people's social behavior and limit the scope of other high-risk systems.
Simpler applications, such as the much-discussed text generator ChatGPT, would face fewer restrictions.
According to the draft law, companies that want to sell risky AI applications in Europe will have to meet strict requirements and set up risk management for their products.
Data used to train the AI programs will have to be checked, and people who provide data will have to be told what the information is to be used for.
European standards
In April, Italy briefly banned ChatGPT over data protection deficiencies and only lifted the ban after OpenAI, the Californian company behind the AI text generator, made certain changes.
During a recent visit to Germany, the head of OpenAI, Sam Altman, warned against overregulation. At one point, he even threatened to pull out of Europe, but subsequently withdrew that statement.
Altman now agrees that rules for artificial intelligence are good in principle, but he has called for clarity. He remains a key figure in the development of AI and was even received by Chancellor Olaf Scholz in Berlin.
René Repasi, a German member of the European Parliament, is relaxed about the lobbying work being undertaken by OpenAI and its parent company Microsoft. He thinks the European market is far too attractive for AI companies to ignore.
"Anyone who wants to sell their AI here must comply with our standards," said Reparsi, Social Democrat spokesperson for AI issues in the European Parliament.
The US Congress is also trying to enact rules for self-learning machines, and Repasi said EU lawmakers have been in contact with their US colleagues. "At the end of the day, we want to create meaningful standards and not compete against each other," he told DW.
Warnings
One of the leading innovators of artificial intelligence, former Google employee Geoffrey Hinton, is among those currently offering dire warnings about the dangers of the technology.
He has argued that AI could soon be more intelligent than the people who created it and that the impact on the labor market could be significant.
Even developers and senior managers at Microsoft and Google have admitted they no longer know exactly how their AI applications work.
Several open letters from various researchers and entrepreneurs, including Twitter head Elon Musk, have suggested different ways to limit AI development.
EU law within two years
The European Union's AI law wouldn't realistically come into force until 2025, as it needs the approval of all 27 member states as well as the parliament.
Given the rapid development of applications like ChatGPT, the technology could well have changed and improved a lot by then, said Axel Voss, an EU lawmaker from Germany. "The development is so fast that a lot of (the regulation) will no longer fit by the time the law actually takes effect," he told DW in April.
Voss has been working on artificial intelligence for the conservative Christian Democratic group in the European Parliament for years and is one of the co-authors of the EU's "Artificial Intelligence Act" draft.
"Actually, we need — for competitive reasons and because we are already lagging behind — more optimism in dealing with AI," he said. "What the majority in the European Parliament seems to be saying suggests they are driven by fears and worries and are trying to be very restrictive."
According to Repasi, the law on AI should be flexible. The question of which AI applications count as high-risk or lower-risk should not be settled in the legal text itself but in an annex, so it can be changed quickly and adapted to technological developments.
The Technical Monitoring Associations in Germany, the Federal Office for Information Security and the Fraunhofer Institute for Intelligent Analysis and Information Systems are considering introducing an "AI certificate," under which applications would have to be certified by independent experts.
Applications such as a self-driving car or robots used in medical surgery require great trust in AI from the general public, which could be established with such a certificate system.