The ITU (the International Telecommunication Union, the United Nations’ specialized agency for information and communication technology) has published a paper called Setting the Stage for AI Governance: Interfaces, Infrastructures, and Institutions for Policymakers and Regulators. It is authored by Urs Gasser (Executive Director of the Berkman Klein Center for Internet and Society and Professor of Practice at Harvard Law School), Ryan Budish, and Amar Ashar, members of the Ethics and Governance of AI initiative run by the Berkman Klein Center for Internet & Society at Harvard University and the MIT Media Lab.
It is nice to see some thought being given to this at a level above the nation state. Policy and regulation are going to require some international coordination. As I’ve written before, wildly differing standards between nations will cause issues.
I won’t go all the way through the paper here, but I will briefly consider the three premises the paper starts from in suggesting how governance should be handled:
- Unknown societal impact: we don’t know what the full effect of AI will be on society.
- Undefined question: we don’t know what questions to ask.
- Diversity of frameworks: AI is developing in areas that already have norms and governance.
There is a fourth premise at the start, in the second paragraph, which is not enumerated: that AI is a lot of different things, and therefore the paper doesn’t really define it. As I’ve said before, I think we need some kind of working definition, even a loose one. If everyone is afraid to try to say what AI is (I feel a fourth Collected Principles of AI Regulation coming on here), it’s going to be pretty difficult to regulate successfully. Like that time no one would speak Voldemort’s name.
The immediate conclusion drawn from the other three premises is that a “comprehensive and detailed governance framework for AI seems unrealistic–at least for the time being”. Probably true, but we don’t have to get to perfection on day one. I have a few responses to the premises:
- We don’t know the societal impact of AI, but there are lots of things whose full societal impact we don’t know. AI just seems easy to point to in isolation, as a specific item rather than one point on a continuum of events (though it really is on that continuum). We should not be daunted by not knowing its full effect; regulations can always change.
- It’s true we don’t know what questions to ask, but some of the examples given in the paper (such as “what is fake news?”) are very granular. If we ask less specific questions, they will be easier to handle in initial regulation.
- It is correct that commercial applications of AI are emerging in areas that already have norms and governance. This should make things easier: we can borrow from those areas, not only the governance itself but also answers to premises 1 and 2.