Here’s an article about panicking less over killer robots that agrees with me, right down to the CIWS. It’s nice to find that other people agree with you, but it did get me thinking a little more. It is very fair to suggest that super killer robots may in fact do a better job than adrenaline-filled soldiers. Perhaps fewer wrong targets would be hit. But I also thought of this guy, who is credited with stopping a nuclear war.
Queery-54 [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons
I think a question arises: where do we put the oversight in AI regulation? It’s fine to give a CIWS free rein; it doesn’t have a lot of overall impact. But what if Stanislav had got it wrong? Do we always assume that human oversight is better, even if it increases the chance of error? Is that a basic principle of AI regulation? Or do we run some kind of test that determines at what point we err in favour of the computer’s judgement rather than the human oversight?

As noted in my last blog entry, a Fed Reserve Governor talked about how we have to tailor regulation to risk, i.e. that as the risk increases, so should the regulation. But what if the increased regulation increases the risk? What if the slower response that human oversight imposes is in fact a negative? It’s a difficult philosophical question, especially when considering lethal situations.

But we have encountered this already—most of you probably drive a car with anti-lock brakes. Perhaps a highly skilled driver can exceed the performance of anti-lock brakes, but most drivers cannot. Which do we prefer on our roads in mass adoption? And which do we prefer in high-pressure race situations? And do we have any real regulatory basis for distinguishing between the two, other than experience and trial and error?