If we are going to deal with AI regulation, rather than just talking in vagaries about the importance of ethics or the impending doom of innate bias or the imperative for AI education, maybe we should have a list of actionable principles which can form the basis of concrete and useful regulation. (And maybe they should not all be cribbed from Isaac Asimov.)
I’m going to keep a running list of good ones that I beg, borrow or steal, or in some highly unlikely scenario come up with myself. Here’s the first one, which I’ve stolen from this article:
Human intervention must remain paramount
I was going to change the words from the subhead in the article to say that there must always be a controlling mind, but in some ways that would defeat the purpose of using AI. So I think it is better to stick with “intervention,” which means the AI can run and run and run like Ruprecht, as long as some human at some point can say stop. I’ve also changed it slightly to make it an imperative.