Previously I described Google’s white paper on AI regulation, released not long ago. Here are a few more thoughts about it (you’ll want to have the paper at hand, as I mention some specific items in it):
- Many of these discussions on AI regulation still begin with the premise that we know what we mean by AI. “AI has now become a real-world application technology and part of the fabric of modern life” according to the white paper. But which algorithms and software are AI and which are not? If we are going to regulate AI, we will have to define it at some point. Skipping past this definitional issue is a weakness in any discussion of regulatory principles, and though this white paper is well thought through, it is a weakness here too.
- The white paper’s recognition that many existing legal rules will cover AI is a strength. It means we have many building blocks to start from, especially since, as the paper notes, we are at an early stage of AI development.
- Recognizing that the response must be supra-national is reasonable. Box 2 in the paper runs through examples of such responses to new technology. Pleasingly, this is not just a knee-jerk reference to international bodies issuing more white papers and holding more international conferences. Instead, it identifies successful past applications of self-regulation, collaborative regulation between national governments, and treaties. Patchwork reactions will be problematic, as the paper notes.
- Explainability Standards are the first area the paper explores. Requirements for explainability in AI have become a popular topic for discussion. I remain unconvinced that it should be a key principle of regulation. Is explainability of a vehicle’s performance important for automobile regulation? Is explainability of a firearm’s capabilities critical for gun control legislation? Were any of the examples of regulatory successes in dealing with new technologies noted in Box 2 of the paper premised first on explainability? Frankly, the discussion on this point sounds a bit unconvinced as well, and it spends quite a bit of time on the problems explainability creates rather than on solutions. There is a big difference between providing explainability to a sophisticated regulator and providing useful explainability to a layperson who just wants their credit adjudicated by a financial institution and isn’t particularly aware that AI is involved. As with any disclosure statement, there is also potential for liability if explanations are claimed to be unclear, inaccurate, incomplete or misleading. A potential solution would be some kind of safe-harbour rules that set explainability standards for disclosure which companies can adopt rather than having to come up with their own.
- And on that last point, on Fairness Standards (the second major area of exploration), the paper suggests that governments and civil society could clarify the prioritization of competing factors to ensure fairness is met. This is a good idea for addressing bias concerns, as it lets those applying AI know what standards they should adhere to. This approach is common in this area; human rights legislation, for example, sets out which categories of matters can be considered grounds for discrimination and, in some cases, where discrimination is permitted (for example, in remedial cases). Box 6 contains some suggested Google tools for evaluating fairness (in the sense of lack of bias). I’d say that some of these tools actually go a long way toward providing useful Explainability Standards.
- Safety Considerations are the third area covered and are probably beyond my ability to comment on. Obviously they are important, but the discussion here moves into the technical. That’s an interesting change in focus, because technical responses are not necessarily regulatory responses. I’d also note the break-eggs-to-make-an-omelette view that has surfaced about over-focusing on safety in regulation. There is a hint of this in the paper as well, where it notes that deterring AI may have an opportunity cost. It also gets pretty difficult to handle from a regulatory standpoint; the paper suggests “[i]f the damage from any errors is minimal…it may be deemed OK to use AI which falls below the human levels of accuracy.” But this gets into an if-a-butterfly-flaps-its-wings-in-Tokyo-what-time-does-the-hurricane-make-landfall-in-Miami problem. A seemingly innocuous small error in one area could have larger implications if its result feeds into information or decision-making on other, larger issues, and this may not be evident at the time. Safety certification through self-assessment is recommended. I’d say this is a low bar. Maybe that’s ok for now.
- Human-AI collaboration is a really interesting section of this paper. Some people would say human involvement is critical; I would say that for every story supporting that view you can find one that says the opposite. But it is probably true that we should have some standards at present for human control, or at least for monitoring, testing, and disengaging.
- Page 26 has two interesting things on it. One is Box 14, which discusses whether or not AI should be given legal personhood. Bullet 1 is “It is unnecessary”, which is both correct and the point at which they should have stopped and ended the discussion. The rest of the page is entitled “Liability Frameworks” and starts a good discussion on the point. It suggests that perhaps existing liability frameworks are not enough, because causation will be hard to prove. Joint and strict liability for all actors in a network is a possible approach, but the paper notes it could chill innovation or place targets on deep pockets. A suggested approach is sector-specific regulation and safe-harbour provisions to encourage innovation. While the discussion on legal personhood might have stopped with “No”, I think this discussion on liability frameworks is a good starting point, but it needs a lot more exploration. At the end of the day, regulation is not worth much without consequences, and it will have to be determined whether those consequences come through our current legal principles of tort claims, fines, a combination of those, or some specific schemes that aim to keep AI practices in a box without stifling exploration, at least for some period of time.
- I began by noting what I think is a material omission: not defining AI, or at least not adverting to the need to do so. I’ll end with one other. There is not much in the paper (is there anything?) about regulation of the use of data or access to data. I don’t see how you can have a full discussion on AI regulation or governance without reference to that. It’s a bit like trying to talk about regulating electricity production while leaving out any discussion of, or at least reference to, distribution. Clearly that is not the focus of Google’s white paper, but I think there should have been at least some acknowledgment that AI governance and data governance must go hand in hand.