This interview considers approaches to AI regulation. Two philosophical approaches can be distilled from it:
1. Notwithstanding that we don’t know exactly where we are going, let’s get started. I agree with this very much. Regulations can be repealed, altered and tweaked later (assuming a vigilant and effective governing body, which I know is asking a lot). The point is that we don’t need to wait around much longer before getting started; if we do wait, we’ll effectively be fighting a rearguard action. Let’s go early and often, and evolve as we learn. I consider this a conservative approach.
2. Great new concepts are at hand and should be addressed. For example, legal capacity for AI, as well as an extension of fiduciary duties to AI creators, should be considered (the two, by the way, seem antithetical to me). I very much disagree with this. Attending now to a great leap forward seems likely to lead to a famine of regulation. We’ll end up mired in grandiose concepts that never reach a conclusion and that distract from regulating the mundane tasks AI will initially take over. I consider this a revolutionary approach.
While it’s reasonable to think that AI itself will be revolutionary, on balance I think that taking a revolutionary approach to regulation would not move matters forward. We will get so hung up on philosophical questions that have no answers that nothing effective will get done. The revolutionary approach will distract from the immediate tasks at hand.