Kind of a scary logo, no?
The US National Institute of Standards and Technology has issued a draft paper entitled Four Principles of Explainable Artificial Intelligence and has asked for comment.
The paper is premised on the idea that explainable AI increases trust in AI. It suggests that the fundamentals that contribute to explainable AI are:
“Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
Meaningful: Systems provide explanations that are understandable to individual users.
Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.”
The paper is fairly short and goes into greater detail on each of these points, supported by significant sources. Comments may be submitted until October 15, 2020.
I was struck, as a lawyer, by the discussion of the requirement that the AI be meaningful and understandable to individual users. The paper appropriately recognizes that different users may understand the AI in different ways. This could depend on the users’ backgrounds (forensic practitioners versus jurors, for example) or on the specific end user or group of end users (lawyers versus jurors, for example). It notes that tailored explanations may be required at the individual level, since two humans may not interpret AI output the same way owing to their differing prior knowledge and experiences, and that, as they gain experience, what they consider meaningful may change. This seems like a complicated approach to me. Does the AI have to be meaningful to all of these groups? All the time, as their experiences change? Perhaps we could resort to the old standby and satisfy ourselves on this point if the AI would be meaningful to the reasonable man on the Clapham omnibus.