
On angst and AI


This consideration of AI and its impacts also reflects much of the angst that pervades contemporary discussions about how AI will affect society and how it should be regulated to prevent negative outcomes. The concluding paragraphs are full of interesting concepts on that point.

“Self-regulation by the industry using a “moral and ethical compass” is the way forward, says Kau. He notes that governments also need to play a bigger part in ensuring the ethical use of AI.

For Chan, the bigger question is: “As AI brings about change and disruption, is society prepared for that? The governments, the education [systems] and society have a role in preparing people for this new world.”

Society will ultimately decide whether it will use AI as a force of good or evil, not the technologies themselves.”

Let’s start at the end. How would society decide whether AI will be used for good or evil? Is there any other technology that we treat that way? What about cars? People may think that cars are good or evil, and we certainly don’t let the technology of cars itself make that determination, but we also don’t ultimately task society with deciding on the good or evil of that technology (or at least we don’t think about it in such holistic terms). Rather, we regulate the use and capabilities of cars toward what we perceive to be the best outcomes. Although AI may be a significantly different technology than what we have dealt with before, perhaps it is not the best approach to throw the baby out with the bathwater. We could just as easily regulate, as we have in the past, toward the best outcomes without holistic considerations.

Back to the beginning. What does it mean to say that there should be self-regulation via a moral and ethical compass? Are these morals and ethics innate, or must they be codified after debate? What happens if such self-regulation is breached? The statement that AI must be guided by a moral and ethical compass presumes that the compass is some sort of law for AI, which leads to a further question: in this case, what is law? Herewith a cheap reference to different considerations of what law is. The point being that the call for a moral and ethical compass begs the question, because it presumes that some agreed set of morals and ethics for AI already exists.

And lastly, stuck in the middle, the role of government, proposed to be responsible for ensuring ethical compliance and for preparing people for change. I would like this. But considering the pace of change in AI technology as compared to the responsiveness of government, does this seem likely?

No wonder people are angsty.
