4 Comments
R Y I O N P U N

Well done. Thank you for the reminder… “internalise the principles behind these rules and implement them in agile, practical ways.”

Pauline Harrison

ISO/IEC 42001 is the framework standard for AI governance and includes the harmonisation of ISO 27001 and 9001, plus the knowledge and experience gained from implementing data governance. It's not new bureaucracy; it is built on existing standards to work in harmony.

Having been involved with The AI Standards Hub, which is one of the groups drawing up global standards in the absence of legislation (with the exception of the EU AI Act), I know that the AI standards coming out (and the ones currently in development) are produced in such a way as to limit bureaucracy and optimise ethical AI development.

James Kavanagh

I understand that’s the intent, Pauline, but I’ve witnessed multiple times in security, privacy and now in responsible AI how the well-meaning intent of standards can devolve into bureaucratic, irrelevant or checkbox processes if they don’t directly connect to and add value during the science and engineering work of building safe systems. It takes deliberate effort to translate the intent of a standard like 42005 into something meaningful for engineers, and they’re the ones who ultimately decide what to build and how to build it. Hence my advice to ignore the bureaucratic approach of the standard’s appendix and focus on its core intent.

Pauline Harrison

The core content is absolutely the right approach.

The course (it's free) at https://www.aiqi.org/42001-course provides a very clear path and understanding, and shows how this fits in with security.

AI cannot carry on unchecked. If organisations are not willing to self-manage using global standards then legislation will follow, and we'll see the same scramble for compliance as we did with GDPR.

If an organisation has already undertaken data governance and security governance compliance, then most of the work has already been done. A gap analysis prior to the impact assessment saves time and effort.

At the moment it's a choice, and people have free will.

ISO, The AI Standards Hub, the AIQI Consortium, etc. have fully open forums for people and organisations working to develop and implement AI to join the working committees. It's how I got involved. Work is still being carried out on other AI standards, including the testing of AI. Maybe you'd like to join and help set standards?
