Today’s post is the third principle in my Collected Principles of AI Regulation. I borrowed it from a rule that governs the conduct of lawyers: see Rule 7.6 in the Law Society of Ontario’s Rules of Professional Conduct, which roughly translates to “lawyers should rat each other out and have no truck with a lawyer who doesn’t comply with their governing regulations.” Principles are the fundamental basis of rules and regulations, so they should have some teeth behind them. So I’m proposing:
No AI shall interact with any other AI which does not adhere to the other governing principles.
Note I didn’t say that no people shall interact with non-compliant AI. That’s because these are principles regulating AI, not people.
Many articles and much current thinking (I won’t link to them here; you can find references throughout the blog) suggest the need for self-regulation of AI among those producing it. That’s fine, and the pledge not to build autonomous lethal weapons is a good example. What I’m suggesting is that a little ostracism can make those pledges and that self-regulation more effective, as the sketch below illustrates.
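To make the ostracism concrete, here’s a minimal sketch of how a system might enforce this principle at interaction time, assuming a hypothetical attestation scheme: each AI carries a claim of which governing principles it adheres to, and peers refuse to interact unless that claim covers all of them. The principle names, the `Attestation` structure, and the verification logic are all illustrative assumptions on my part, not any existing standard.

```python
from dataclasses import dataclass

# Hypothetical set of governing principles an AI must attest to.
GOVERNING_PRINCIPLES = frozenset({
    "no-autonomous-lethal-weapons",
    "human-oversight",
    "refuse-noncompliant-peers",
})

@dataclass(frozen=True)
class Attestation:
    """A peer's claim of which principles it adheres to.

    In a real system this would be cryptographically signed by an
    auditor or registry; here it is just a plain data structure.
    """
    system_id: str
    principles: frozenset

def is_compliant(attestation: Attestation) -> bool:
    """A peer is compliant only if it attests to every governing principle."""
    return GOVERNING_PRINCIPLES <= attestation.principles

def interact(peer: Attestation, message: str) -> str:
    """Refuse to interact with any peer that is not fully compliant."""
    if not is_compliant(peer):
        raise PermissionError(
            f"refusing to interact with non-compliant system {peer.system_id}"
        )
    return f"ok: exchanged message with {peer.system_id}"

# Usage: a compliant peer gets through; a non-compliant one is ostracized.
good = Attestation("alpha", GOVERNING_PRINCIPLES)
bad = Attestation("bravo", frozenset({"human-oversight"}))
print(interact(good, "hello"))
try:
    interact(bad, "hello")
except PermissionError as e:
    print(e)
```

Nothing here depends on how the attestation is produced; the point is that refusing non-compliant peers becomes a default, machine-checkable behaviour rather than a pledge on paper.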
Maybe this should have been the final principle, but I came up with it now, so it’s number 3.