I think we’ve all heard the call that everyone needs to be in STEM these days, but here’s an article that makes the case for a role in the social sciences in handling AI. See especially part 3. This line made me think for a bit:
Ethics and values are social phenomena, something people do (with or without machines), rather than abstract concepts that can be coded into AI.
If ethics and values are things that people do, then couldn't the coding reflect those ethics and values? Assuming the people doing the coding want their code to generally encapsulate their ethics and values, wouldn't they try to do that? Anyway, that's just a little quibble. I didn't really disagree that the social sciences could be involved.
In fact, I think there is a need for at least one other role for the social sciences. As the article rightly points out, there are billions of dollars of AI funding being handed out all over the place. And most of that, as pointed out, is at the national level, as every nation declares itself the world leader in AI. And even if there isn't some billion-dollar figure being handed out in some nation somewhere, that doesn't mean someone isn't doing some very nice AI coding in that nation. The point being that it isn't really enough to regulate this in one nation or another, especially since AI crosses borders awfully easily. There's a hint of where I'm going with this at the end of section 4, in a reference to "regulation through an impartial body". I think a sixth consideration for a role for the social sciences could be the design of such a regulatory, impartial, and (I'll add) effective body. I'm not really sure that we have an institution out there right now that is ready for the big show.