Principles of AI regulation, Regulating AI

No fate but etc. etc.


I mean, any opportunity to go with that photo, right?

You may well have heard by now about the Lethal Autonomous Weapons Pledge that came out a couple of days ago. If you have not: as you can imagine, a number of high-profile people, organizations, and companies have pledged that The Terminator shall not become a documentary. Fair goal.

A couple things about it.

1. It doesn’t define AI. Someone around here has been talking about that, and it creates a bit of an issue here. We already have autonomous capabilities in weapons, such as the ability of cruise missiles to follow terrain and maps. Presumably, we don’t intend by this pledge to catch that kind of technology, which has been around for some time. Some additional qualitative nature of the weapons must be inferred here, or it will be difficult to argue when the pledge has been violated. The reference to “life-taking decisions” gives a hint. But at what point is that enough? If a launched missile reads an IFF transponder (or lack thereof) on its own and determines a course of action, is that enough to be AI? Or does it have to make the decision to launch on its own? Lots of room for debate in this policy, because the whole AI definition is left wide open.

2. Is it always unethical for AI to make a decision to take a life? Is it always worse for AI to make a decision to take a life? What about a fully automated defensive weapon like a CIWS? If it were to make, on its own, a decision to take a life in order to save others, is that unethical, and worse than what we have today? What if I gave an AI weapon orders to decide on its own to launch and kill anything in a set of grid coordinates that was wearing a helmet? Terrible, right? Because it might kill non-combatants. Totally an exercise in “removing the risk, attributability, and difficulty of taking human lives,” which the pledge warns us of. But is that actually ethically worse than sending this to the same grid coordinates:


3. What’s the focus on lethal all about? If a machine uses its total discretion to select me as a target and attack me, but only to the extent of removing all my limbs and blinding me permanently, is that somehow better, ethically speaking, and something that doesn’t need to be the subject of a policy on AI regulation?

4. It is nice, though, to see (note the penultimate sentence) the total absence of governance or regulatory action being recognized. Flaws aside, it’s good to start somewhere. Push Judgment Day off by a couple of weeks at least…



