A couple of days ago I began a list of principles of AI regulation. I am unashamed to say that most of these are likely to come from other sources. But the second principle I propose is that all AI must fail the Turing Test (which was a clumsily artful way of saying that AI must disclose itself as AI if it is not obviously AI).
I swear I came up with that one myself, but lo and behold, it turns out there is a bill before the California legislature that is analogous to that principle. Mine is a little broader: the current draft legislation applies only to matters online (“public-facing Internet Web site, Web application, or digital application, including a social network or publication”) and only to bots that intend to deceive. Three thoughts about that: 1) why would a bot launched with the intent to deceive also disclose its botness; 2) the bill requires disclosure only of the bot’s botness, not of its intent; and 3) since the disclosure requirement is triggered by intent, folks who wanted to avoid disclosure would presumably argue about a bot’s intent.
“Bot” is defined to mean an automated account or platform designed to mimic or behave like a person, which creates another threshold that would likely be disputed. It’s interesting that independence of the bot’s actions is not a requirement, which in some ways makes the bill applicable to more than AI.
All in all, though, the bill is much narrower than my principle and subject to a lot more nuance. I prefer mine because it is more straightforward and simpler to apply.