Late last month, the OECD's 36 member countries plus six others signed on to the OECD's AI principles. Described as a global reference point for trustworthy AI, the principles follow a familiar pattern: similar guidelines and principles have served as the basis for laws in other areas. Given the lack of significant legislative activity at the national level, this may be a good attempt to kickstart things. The principles are pretty straightforward:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
- AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Here are a few observations on the complete statement of principles, which you can download from the link above:
There is effectively another definition of AI in this document (in addition to the ones I've written about):
AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
This is another rather vague definition, one that would take considerable interpretation to determine whether a given system is an AI system.
The first four points noted above come with a number of sub-considerations in the complete statement; point 5 does not. What is meant by accountability is therefore not fleshed out. Does this mean tortious accountability? Criminal accountability? Regulatory accountability? I presume it could include all of these, depending on the context. The accountability principle is also forward-looking, providing that AI actors must keep up with the "state of the art." In any case, the explicit inclusion of accountability among the principles is a welcome addition to the thought process around what is needed for the full content of regulations.
Perhaps just as interesting as the principles noted above are the directives as to what governments should try to do; there is as much in these as in the principles themselves. From promoting transformative policy environments and reviewing current regulatory frameworks, to ensuring preparation for transformative technology, acting internationally to reach consistent, interoperable standards, and gathering and tracking data to implement the principles, the OECD offers up a broad task list for national governments. One wonders how many national bureaucracies are actually capable of this, let alone ready to try.
Generally speaking, I'd say the document is a good and thoughtful try. For legislatures that have stalled on this topic, it sets out food for thought, if not a straightforward and easy-to-read framework. And despite the significant roles it gives to government, it provides a road map for mandarins to follow if they don't know where to start.