The European Union’s High-Level Expert Group on Artificial Intelligence released a document entitled Policy and investment recommendations for trustworthy Artificial Intelligence at the end of June. It’s a lengthy read in two significant chapters: I) Using Trustworthy AI to Build a Positive Impact in Europe, and II) Leveraging Europe’s Enablers for Trustworthy AI. In the end there are 33 recommendations and 11 key takeaways. The takeaways are a lot to digest, as they cover everything from empowering humans, uniting research capabilities, and upgrading education to developing a 10-year holistic strategy. Sometimes less is more, and one has to wonder if some good suggestions might get lost in the forest here.
I’m going to comment on a few of the items in here that pertain to regulation.
First off, item 2.4 is “Introduce a mandatory self-identification of AI systems” for situations where a human end user might believe they are interacting with a person rather than a machine. I agree with this, of course, since I came up with it first.
Also found in my Collected Principles of AI Regulation at point 5 is a statement that is pretty close to this:
In addition, we urge policy-makers to refrain from establishing legal personality for AI systems or robots. We believe this to be fundamentally inconsistent with the principle of human agency, accountability and responsibility, and to pose a significant moral hazard.
It’s nice to be right, or agreed with, or validated, or something, even if no one knows. I live in the shadows.
The paper suggests a review of the current regulatory regime to identify gaps so that AI can reach maximum benefit and minimise risks. Policy and regulations should take a risk-based approach: “[n]ot all risks are equal,” and therefore the regulatory response should be proportionate, including using a precautionary principle where the risk involves great harm. In lay terms, that means banish anything we are scared of. Regulation, the paper notes, should consider the level of AI autonomy and aim for outcome-based policies. And one size doesn’t fit all: the high-level expert group recommends specific regulations based on the field in which AI is deployed.

Ensuring civil liability that provides adequate compensation is a goal, as are strong consumer protection laws. Criminal laws are mentioned: “consider the need to ensure that criminal responsibility and liability can be attributed in line with the fundamental principles of criminal law.” I’m not sure what that means. Attribution of criminal liability seems like an indirect approach to criminal liability, one that to my ears is not on all fours with basic criminal principles such as mens rea and actus reus. They let that one hang out there without any additional colour, so I don’t really know where they thought they were going with it.
There’s also this proposed chore:
We recommend a systematic evaluation of the extent to which existing institutional structures, competences, capacities, resources, investigation and enforcement powers arising under existing legislation are capable of adequately ensuring meaningful and effective information-gathering, monitoring and enforcement of legal standards in ways that provide proportionate and effective protection.
I don’t really disagree with this either, but…
…we’re gonna need a bigger bureaucracy.
I guess I can’t complain. I’ve written a few times that people in governments should really just get on with it.
Section 30 considers making the EU into a harmonized regulatory field (wasn’t that the point of the EU?) without national variations but with a pan-European institutional structure. One gets the sense that if some of this is taken up and acted on, there will be a bit of a regulatory or standards battle on the legislative front between what the EU produces and what the US produces, assuming they don’t align.
Overall, it’s a long but decent read. There are some nuggets in it, and I’ve tried to highlight a few. I think the authors may have distracted from those nuggets by offering so many recommendations without prioritizing any.