Much ink is being spilled these days over where the liability lies when an autonomously piloted car has an accident. I don’t really understand the fuss here. Vehicles are made of components, of which artificial intelligence may be one. If a component of a vehicle fails and causes an accident, no doubt the vehicle manufacturer and the component manufacturer are open to liability in negligence. If the driver does not adhere to an appropriate standard of care (whether or not artificial intelligence is a component in the vehicle), no doubt they are open to liability in negligence. The requirements to make out a negligence case under Canadian law (duty/standard of care, causation, and reasonable foreseeability; similar tests apply under the laws of other jurisdictions) can no doubt handle autonomously piloted cars and their AI components.
Indeed, considerations of negligence for automatic piloting systems have been around since the last century, as this article reviews. One can imagine, however, that not all crashes are easy to explain, and therefore (see page 48 of the article) injured parties sometimes invoke the doctrine of res ipsa loquitur. This is old-timey speak for “the thing speaks for itself,” which in fancy lawyer speak translates into if-an-unusual-injury-causing-thing-happened-and-one-party-had-the-entire-control-of-the-situation-please-judge-assume-this-was-negligence.
Which leads to a thought. If an accident occurs between a car that is autonomously piloted and one that is not, and we are able to demonstrate that the capabilities of the autonomous pilot generally far exceed those of a human driver in terms of safety and decision making, then perhaps most of the ink spilled about liability is on the wrong side of the ledger. One might well argue that the doctrine of res ipsa loquitur in such a circumstance points towards the assumption that, absent a malfunction in the artificial intelligence, it is the human driver who is negligent.*
*Today’s post sponsored by Knight Industries.