I set up a whole blog about regulating artificial intelligence and some guy suggests that’s impossible.
It’s understandable why people feel that way. To some extent the task seems impossible: there are myriad areas to consider, and the use of the technology seems to be far outpacing any kind of government response. The article suggests we may be entering an era of “outdated laws”. But are we, though? Some of the oldest laws cover matters that are still pertinent today. Should we throw up our hands and assume that legal principles of slander, trade, and liability cannot apply to automated forgery, business disruption, and accidents? And why have we assumed that all outcomes of AI are negative; what if driverless cars lead to fewer accidents? Which AI implementation is currently accelerating wealth inequality in a way that existing legal systems cannot address?
It is true that highly complex machines are difficult to understand, their multitude of components largely beyond the comprehension of any single individual. And yet here we are, setting out guidelines on how to address these technologies, sorting out which levels of government should address which issues, and having our regulatory organizations review how automated technology affects markets and market conditions.
It is fair to say that we have not regulated AI as a whole (and fair to say that few current laws address it directly). But that is a bit like saying there is no single law that addresses the construction of a house. There are many laws and regulations that govern home construction, even as building technologies change all the time.
You still eat an elephant one bite at a time. I mean, assuming you want to eat an elephant.