Here’s a summary of legislation proposed by Senator Greenleaf (see what I did there in the title of this post) and Senator Stewart of the Pennsylvania State Senate. The legislation doesn’t appear to be drafted or publicly available yet.
The gist is:
… require all risk algorithms or artificial intelligence programs to meet certain requirements. They must be shown to be free of bias toward any race, gender, or protected class. They must be periodically re-validated and revised in accordance with national best practices. The report of these revisions must be publicly available, along with information about the programs or algorithms and the risk factors they analyze.
This leads me to some questions. On what basis will one determine the lack of bias: the outcomes the program produces, the structure of the program, or the factors it is permitted to evaluate? What national best practices exist? How will information be made public in a form that people can actually understand? Who will perform the re-validation?
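To make the first question concrete: one way an auditor might operationalize an outcome-based bias test is a demographic parity check, comparing the rate at which a risk program flags people as high-risk across groups. The sketch below is purely illustrative; the data is invented, the threshold for "biased" is not defined by any statute, and real audits would weigh several (sometimes conflicting) fairness metrics.

```python
# Illustrative outcome-based bias check: demographic parity.
# All data is hypothetical; this is a sketch of one possible metric,
# not a method prescribed by the proposed legislation.

def demographic_parity_gap(predictions):
    """Return the largest difference in high-risk flag rates across groups.

    predictions: list of (group_label, flagged_high_risk) pairs.
    """
    totals, positives = {}, {}
    for group, high_risk in predictions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if high_risk else 0)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: 50% of group A flagged vs. 25% of group B.
sample = [("A", True), ("A", False), ("A", True), ("A", False),
          ("B", False), ("B", False), ("B", True), ("B", False)]
print(demographic_parity_gap(sample))  # 0.25
```

Even this toy example surfaces the legislative gap: the statute would have to say which metric counts, over what data, and how large a gap constitutes prohibited bias.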
My point is not to criticize the premise of this proposed legislation, but to note that absent a substantial regulatory structure and a holistic approach, one-off legislation like this will be difficult to make effective.