The newly formed Global AI Governance Commission has apparently proposed (although I can’t find it on their excellent and highly organized website) that AI decisions must be subject to regulations requiring that they be trackable back to a human being.
Quaere how this would work: how would compliance be documented or implemented for the day-to-day decisions made by artificial intelligence? Would it be sufficient, for example, to launch a purchasing bot with the intent that it conduct purchases within set parameters, or would every purchase have to be monitored? Much to be discussed.
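One way to read "trackable back to a human being" is as an audit-trail requirement: the bot acts autonomously within parameters a human sets in advance, and every decision (approved or not) is logged against that human. A minimal sketch of the idea, in Python; the class names, logging schema, and the "accountable human" field are all my own assumptions for illustration, not anything taken from the Commission's proposal:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class PurchaseParameters:
    """Limits set by the human who launches the bot (hypothetical schema)."""
    max_per_order: float
    approved_vendors: set


@dataclass
class PurchasingBot:
    """A purchasing bot whose every decision is attributed to a named human.

    The bot never acts outside its parameters, and even rejected purchases
    are logged, so each decision can be traced back to the person who
    launched the bot and set its limits.
    """
    accountable_human: str
    params: PurchaseParameters
    audit_log: list = field(default_factory=list)

    def attempt_purchase(self, vendor: str, amount: float) -> bool:
        within_params = (vendor in self.params.approved_vendors
                         and amount <= self.params.max_per_order)
        # Log the decision either way, attributing it to the human sponsor.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "vendor": vendor,
            "amount": amount,
            "approved": within_params,
            "accountable_human": self.accountable_human,
        })
        return within_params


bot = PurchasingBot("J. Director", PurchaseParameters(5000.0, {"Acme Supply"}))
bot.attempt_purchase("Acme Supply", 1200.0)   # within parameters: proceeds
bot.attempt_purchase("Unknown Co", 9000.0)    # outside parameters: blocked, but still logged
```

Under this reading, the human is not monitoring each purchase; the parameters plus the audit log are what make the bot's decisions trackable after the fact. Whether that would satisfy a regulator is exactly the open question.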
But this also raises a governance question. Under Canadian law, generally speaking, the directors hold the power and duty to manage the business and affairs of a corporation. Certain delegation powers exist, including the ability to delegate such powers to officers, and directors must exercise their powers with the care, diligence and skill of a reasonably prudent person. The question therefore becomes: if management of any part of the business and affairs of a corporation were delegated to artificial intelligence, would this hold up under a corporate statute such as those in place in Canada? Is it truly delegation to remit such powers to artificial intelligence if the director does not understand how the artificial intelligence will make decisions (for example, if the director is not familiar with the operation of algorithms or coding)? Is it reasonably prudent to delegate in this way? And is it permissible under the delegation powers in the corporate statutes for directors to delegate via an officer who then controls the operation of artificial intelligence that makes some decisions in the management of the business and affairs of the corporation?