And so do people who want to get on with things. We haven’t seen much active legislative or governance effort on the AI regulation front, despite a lot of papers and high-level thought around it. Predictably, then, people who want to get on with things get on with things.
In the linked paper, a group of IBM researchers propose an AI supplier’s declaration of conformity (SDoC). The stated purpose is to increase trust in AI. Increasing trust and certainty of outcome is a large part of what regulatory law does. So if there isn’t going to be much lawmaking from governments, it is no surprise that someone would come up with their own plan. The lead-in paragraphs to section 3 discuss this in detail: why a new technology needs a framework to gain trust, which, once gained, leads to general acceptance of the technology.
Appendices A, B and C set out the proposed SDoC and a hypothetical example of a completed one. One can easily imagine this kind of disclosure being added to, or required under, securities continuous disclosure regimes or in reporting to financial institution governing bodies.
Much of the proposed questionnaire could easily be completed in lay terms. Some of it, such as the request for disclosure of the algorithms used, with a suggestion to reference technical papers, could not. And indeed, much of the information provided likely could not be verified by a regulator unless the regulator held significant technical expertise. One wonders just how incredibly far behind regulators are at this point.
Apparently, for the time being, the law does not abhor a vacuum.