This workshop summary offers some genuinely practical ideas (things you could actually put into practice) about how to engage in AI regulation.
The suggestions of having the ability to look back at how an AI made decisions or acted, as well as generalist investigative or oversight bodies, are not ones I had heard before. Hidden in the article’s photo caption is a qualification that those should be engaged only above some materiality threshold, which sounds reasonable; I’m not sure I want the NTSB of AI descending on my house if an automatic coffeemaker overflows. On the other hand, I am not sure the suggestion of holding software developers to the same regulatory licensing or liability standards as doctors, lawyers, or other professionals is necessary. The same controls might be achieved through specific oversight in industry regulation; in other words, through the regulatory bodies that govern securities, banking, insurance, and privacy, for example.
We hear a lot about AI ethics on the twitters and at the conferences and such. It is interesting to hear someone in the industry note that such a focus distracts from regulation. I don’t think it has to be fully a distraction, but if we are not engaged in actual regulation it is certainly a limiting point of focus. It may also not really be an industry ruse; industry doesn’t promulgate regulation, so this may simply be how the industry approaches nascent regulatory questions.