Kaljulaid in Munich: EU AI Act best basis for global governance, minus fines

Kersti Kaljulaid said that digital risks are as existential for us today as nuclear proliferation was for the generation before us. She said the EU's new AI law should become the framework for a global collaborative effort to mitigate these risks, with Big Tech certain to be involved in legal policymaking in the future — which is why, she argued, the gargantuan fines have to be eliminated to ensure transparent collaboration.
Former President of Estonia Kersti Kaljulaid discussed geopolitics and AI at the Munich Security Conference on Friday, February 16.
The debate highlighted today's reality that the EU has its own legal approach to AI governance, the US has another, China has a third, and the UN is now catching up.
"China, as you know, loves to produce, the EU loves to regulate, and the US loves to innovate. But where is the rest of the world in the bigger picture?" Nighat Dad, executive director of the Digital Rights Foundation, emphasized the perspective of the Global South.
Sundar Pichai, Chief Executive Officer of Google, gave the corporate and transatlantic perspective, while Kersti Kaljulaid discussed Europe's distinctive governance of new technologies.
"Governments were the first to know about innovations in the past, but now they must catch up with the private sector in terms of defense and security developments," she said.
"So right now, we are just adding this element of legality – onboarding the private sector into our international, multilateral legal process. The number of governments is far bigger than the number of tech companies of that size. I think in 10 years they will be participating at a comparable level in setting our legal space," she added.
"And I think this is something we need to get our heads around. It's also the best way to do it; we're not able to do it without them. They [tech giants] are not so numerous, and those responsible companies willing to work with us could help uphold this new international digital world order."
"The trick here is, of course: can we trust that despite all the geopolitical competition between, let's say, the US and China, they would come together and agree and control AI the way we think it should be controlled?" she said.
She said digital risks are as existential as nuclear threats were in the past, when governments succeeded in curbing the weaponization of nuclear power, and that AI regulation needs to be adopted globally by our generation with the same urgency and seriousness.
"In the case of nuclear proliferation, somehow governments were able to come together and actually overcome the difficulties and say this is a real existential threat, so we must be able to cooperate now as well."

"In addition to the US-China [tech standards] competition, there are non-government or malign actors. They are far more difficult to control, and of course they can rely on small language models, which are more specific, less energy-consuming, and far more difficult to detect."
"So the only way to try to keep it under control now and hopefully 10 years from now is to monitor the energy consumption and try to control where the chips are going. I don't see any other way. That is the real danger: small irresponsible groups that have some SLMs instead of LLMs," she said.
The moderator, Ian Bremmer, asked Kaljulaid whether she sees this as something that should be done at the top levels of government, driven by Beijing and Washington irrespective of the administration, and whether the Europeans necessarily need to be part of that.
She said that the European AI Act has so far proven to be the most comprehensive and has, in her view, only a single flaw: its fines.
"Europe has always been the champion of testing regulation, and it is valuable in this process. When I think about the EU AI Act, I'm very happy about it; it's the most comprehensive. The US has made many attempts at legislation as well, but the EU act is most comprehensive now," she said.
"There's just one element that I'm concerned about: quite high fines. And if you go and say to somebody, 'If you don't follow my rules, I'm going to fine you €35 million, or 7 percent of your revenue,' /.../ then we risk that we cannot really achieve what is envisaged by the regulation, which is supposed to onboard the private sector, because they would not be open with you."
"Estonia's government relies very much on digital tools. And for us, it's vital for keeping the system running that our critical infrastructure companies talk with the government openly about their cyber risks and actual attacks and failures in their cybersecurity. So we don't fine them. We don't. And it works very well."
"So I would advise that the AI Act could be the basis for this common effort to come together to make sure that AI is kept under control. But I would say minus the fines. That's a huge problem right now in the system."
Pichai, the chief executive of Google, said that Europe faces a tricky balancing act in what it is trying to achieve. "I think it's important for Europe to keep the balance as they proceed here," he said.
"Europe was definitely at a disadvantage with fragmented regulation on the internet across many different countries. /.../ So I think it's important. It's a transformative moment. It'll affect every sector, including the competitiveness of Europe as a whole. You have to get it right where you're promoting innovation and companies can adopt AI, while making sure there is responsibility to go with it."
--
Editor: Kristina Kersa