(Photo by Endlisnis licensed under the Creative Commons Attribution 2.0 Generic license.)
The Israel Innovation Authority (a publicly funded agency under the direction of the Israeli Ministry of the Economy) recently released its Innovation Report 2018, which contains a chapter entitled The Race for Technological Leadership. Primarily this chapter considers the importance of AI in maintaining a competitive edge in a developing economy.
Identifying 17 countries that already have significant stated and funded policy initiatives under way to support the development of AI, the report considers the importance of keeping up with the Joneses. Government action in supporting AI through education in the appropriate fields and technological training of workers is suggested. These are common themes in these types of reports. Concern that Israel is falling behind is expressed. Privacy concerns are raised, and the importance of adopting standards such as the EU's General Data Protection Regulation is noted. Curiously, though, this thought is not particularly developed: the chapter does not consider whether it would be better to adopt the same standard as the GDPR or a different one. Such a question would seem to be fairly important in the context of maintaining a foothold in a global AI race.
So far so boring. Particularly so in light of the most interesting comment in the report about AI regulation. Usually these reports also suggest that there will have to be balance in regulation, allowing for innovation and economic growth while at the same time protecting [insert something trite about the interests of citizens that shows that the government cares and should be voted for and that sounds the same the world over]. But this report very casually stretches those boundaries when considering the risks of AI, asking: “To protect the public, should there be a stringent threshold that requires the production of a retrospective account of the algorithm’s decision making process, or would it be better to lower the bar of culpability in order to promote the adoption and development of AI-based innovation with all its benefits?”
This is a new thought that I have not seen in these types of reports. First off, the protection of the public is not assiduously ensconced in AI transparency or specific regulation, which is a break from the typical genuflection. The suggestion is that protection of the public, presumably through the economic and practical benefits of AI, could be promoted by not curtailing AI through regulation. Losses and damages caused by faulty AI could, in the long run, be outweighed by greater benefits, and accepting them should therefore be encouraged. A lowered bar of culpability suggests lessening liability. Many reports and government proposals talk about regulatory sandboxes to promote experimentation. This moves the consideration to kicking the regulators out of the sandbox.
Bold. I, for one, ברוך הבא (welcome) our new Grey Goo overlords.