As part of its response to the White House’s February 2019 executive order on artificial intelligence, the National Institute of Standards and Technology has issued a plan for federal engagement in developing technical standards and related tools.
Technical standards are noted as being important to drive innovation, public trust, and public confidence, and to develop international standards that promote and protect those priorities. Technical standards of this sort, I assume, have to make their way into some facet of law if they are going to have those kinds of effects. Accordingly, this document does have an impact on regulatory content. The theme of championing “U.S. AI standards priorities in AI standards development activities around the world” appears, as it did in the executive order. Again, for standards to be useful or adoptable internationally, I assume there will have to be some type of codification.
Indeed, although much of the document is unsurprisingly focused on the development and content of standards by government agencies, some attention is given to legislation. Agencies are directed to know existing statutes, policies, and resources on the development and use of standards, and to conduct a “landscape scan” and gap analysis to determine what needs to be developed. In addition, we have previously seen concern expressed (see, for example, A letter from Braavos) that the technical sophistication of AI makes it difficult for regulatory bodies to contend with the speed at which things may progress. This NIST report recommends agencies “grow a cadre of Federal staff with the relevant skills and training”. Overall, the stated actions are:
- commit to deeper, consistent, long-term engagement in AI standards development activities to help the United States speed the pace of reliable, robust, and trustworthy AI technology development by:
- bolstering AI standards-related knowledge, leadership, and coordination among Federal agencies to maximize effectiveness and efficiency;
- promoting focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools; and
- supporting and expanding public-private partnerships to develop and use AI standards and related tools to advance reliable, robust, and trustworthy AI.