I wrote last week about an apparent legal definition of artificial intelligence in a Canadian treasury board directive. At about the same time, the EU was rolling out a discussion paper about a potential definition of AI that looks very much like the kind of thing you would start to insert in legal documents.
Here’s the Canadian definition:
Information technology that performs tasks that would ordinarily require biological brainpower to accomplish, such as making sense of spoken language, learning behaviours, or solving problems.
Here’s the EU definition:
“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.
As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”
OK, so which do we like better? Let’s consider some arbitrary categories of evaluation:
Basis

There is no explanation I can find of how the Canadian definition was determined or who came up with it. As I mentioned last week, there is a related definition of information technology in treasury board directives that we will dispense with for this discussion. So let’s look at a couple of the components.

First, the definition suggests that AI must perform tasks that would otherwise require biological brainpower. This leads to the question: what requires biological brainpower? Heartbeats require biological brainpower, but they can also be regulated with pacemakers. Does that make pacemakers AI? Presumably not. Is the use of “biological” as a qualifier to “brainpower” important, or just excess verbiage?

The examples in the definition, “making sense of spoken [why not written?] language, learning behaviours, or solving problems,” qualify the nature of the tasks. Thus there is an implication that the tasks must be analytic in nature for something to be AI. One presumes an implication of decision making, which is innate in the use of language or the solving of problems, though the definition does not say this. Overall, this seems to say that AI must be information technology performing tasks that an organism would otherwise do with the use of higher brainpower, and to a standard (based on the references to language and learning behaviours) that is somewhere between a dog learning tricks and a child learning to speak.
We know a lot more about the basis for the EU AI definition because the authors wrote a few pages about it. At a very basic level, they wanted something to be rational, to acquire data, and to act on its environment in order to be AI. The example they give is of a product that can observe a floor, determine whether it is clean or dirty, and then decide whether or not to clean it. The acquired data is transformed into decision making. Because the document devotes several pages to this, I’m writing less here than I did on the basis of the Canadian definition; you can just go and read it for yourself. The point is that the basis of the EU definition is laid out very clearly.
Clear win to the EU definition on basis.
Precision and clarity
The Canadian definition is not very precise or clear, relying on words such as “ordinarily” and “such as” and never defining what it means by “perform tasks”. Not super.
The EU definition is not much better. There are a couple of “or[s]”, a “possibly”, and some language about the adaptability of AI systems drafted in a way that suggests AI must be adaptable by determining how its prior actions affected its environment. I’m not sure they really meant that latter point; presumably an AI in the mould of a floor cleaner could simply start again from scratch without any awareness of the impact of its prior floor cleaning.
So although one would expect more precision and clarity in the EU definition given the explanatory discussions that went into the drafting, it really isn’t much more precise. Tie.
Ease of application

Imagine being a judge applying a piece of legislation that contains one of these definitions. Which would be the easiest to apply if you were trying to determine whether something was AI?
Interpreting the Canadian definition would require a lot of extraneous analysis: what requires biological brainpower, what tasks rise to the level of the examples given, and what is the nature of those tasks? Expert advice would have to be sought, and there is plenty of room for debate. But the path to the analysis is pretty clear because of the brevity of the definition.
On the other hand, the EU definition, unsurprisingly since it has multiple authors, is a bit of a word salad dressed with alphabets. There are so many yes/no decision points in the definition that analysing whether something is or is not AI would be a significant endeavour. This is far too complicated to be useful, in my estimation.
Point to the Canadian definition.
Overall, although it isn’t very clear, I think the brevity of the Canadian definition goes a long way toward making it a better and more usable definition for legal purposes.