Here’s the draft AI in Government Act from the US Senate. It proposes the creation of a policy lab to ensure that AI (never defined, by the way, as if we all just know what that means) as used by the feds is in the best interest of the public. It asks that the lab bring together individuals to keep current on technology, advise the feds on technology acquisition, and study the policy, legal, and ethical challenges and implications relating to AI. Annual reports to Congress from the lab are required. An advisory board will lead the lab, with membership from the Secretary of Commerce, the Office of Science and Technology Policy, the Office of Management and Budget, the Department of Commerce, and the Administration.
There’s nothing particularly objectionable in this, and I think much of the focus is meant to be on how government uses AI. But I also feel like a lot of this ground has been covered. Is it really going to be hard to find current considerations of ethical challenges in AI? I feel like you might not need a whole infrastructure to see what concerns people are surfacing in that area.
My main takeaway is that there is a lot of studying, consultation, and activity going on already, and although it’s fine to make sure that governments are up to date on what is going on, I suspect a lot of government agencies and quasi-government agencies already have this underway. It would be nicer to see some actual legislative activity than another year’s wait for a report. Run slowly, robots.
Photo by Matan Segev on Pexels.com