I keep rereading this because, as I’ve mentioned in other posts, no one seems to have spent much time on a legal definition, despite that being a fairly important first step toward useful regulation. Every time I read it, something else in it strikes me as interesting.
In this case I was looking at the proposed definition of Artificial General Intelligence. Compare it to part (C) of the proposed definition of Artificial Intelligence:
Systems that act like humans, such as systems that can pass the Turing test or other
comparable test via natural language processing, knowledge representation, automated reasoning, and learning
What’s the difference? If a machine can pass the Turing Test, how is that different from a machine that “exhibits apparently intelligent behavior at least as advanced as a person across the range of cognitive, emotional, and social behaviors”? Does that mean a system counts as Artificial General Intelligence only if it has actual emotional and social behaviors and uses them apparently intelligently? Or does it mean that it has apparent emotional and social behaviors? Presumably it must mean, in some way, that whatever is being defined as Artificial General Intelligence has to meet a test higher than the Turing Test. But how would one actually apply that test?
I think part of the problem here is that the authors were considering something they couldn’t fully conceptualize yet; hence the inclusion of “notional future” as a descriptor in the definition. Instead of getting caught in that rabbit hole, maybe it would be better to say that Artificial Intelligence is a system that can pass the Turing Test in a particular field or narrow spectrum of exercises, whereas Artificial General Intelligence is a “system that can pass the Turing Test or other comparable test across a broad spectrum of processes simultaneously, including those that may be considered cognitive, emotional, or consisting of social behaviors.”