ISO 42001 is the key standard for Responsible AI Management, but only 16 AI companies in the world have been certified so far. I led one of those programs and learned first-hand why it's so hard.
Any recommendations on courses for audit training?
So I personally haven't done an AI audit course - a long time ago I did ISACA CISA and other audit courses. The one I've heard most about (though I don't have any first-hand experience with it) is the BABL course - their curriculum looks good and you can tell it's coming from actual practitioners. I know it's not an audit course (it's more AI governance and AI technical safety), but I have to plug the folks at https://aisafetyfundamentals.com/. I've taken those courses and they're terrific (and free).
I participated in both the BlueDot AI Safety Fundamentals course and the AI Auditor Certificate Program by BABL AI (which opened the door for me to become an AI auditor and which I strongly recommend). So, I’m happy to answer any questions you may have, @Nicole Jahn.
Great, thanks James.
Good to see attention to the meaning of ISO 42001 certification for AI companies, James! I'm curious:
- What was the organizational scope of the AWS and Google certifications? (I'm assuming it wasn't the entire company in either case)
- What aspects of ethics does the ISO 42001 standard actually cover? For instance, does it cover ethical use of data labeling suppliers or proactive management to identify and mitigate biases?
Thanks for any insights you can share!
Great read and insight. What I find most lacking is the actual impact on engineering practice and on the requirements for AI systems. After all, responsible AI is a matter of doing the right things in the right way and avoiding the wrong things and wrong practices. I see companies focus too much on 'compliance' by ticking off checkboxes and filing comments in huge multi-sheet Excel files, while the actual engineering practice is hardly ever changed or even discussed. This creates a weird dystopian simulation of compliance, where all these Excel sheets create some representation of the actual system and engineering to fit the expectation of whatever it is the system should comply with. I am exaggerating a bit, of course. In my opinion, it must be done the other way around: change engineering practices and focus on outcomes first, then collect evidence to demonstrate compliance.
Couldn't agree more, Patrick. So many times I've seen checkbox compliance that is disconnected from engineering practice, and over time the two diverge so far that what remains is a theatre of compliance, fabricated and artificial. It's sad that the dystopia you describe is far too common: auditors and internal compliance people poring over spreadsheets and narratives that bear no real relation to the systems actually built.
I use three terms - high-integrity assurance, checkbox assurance and malicious compliance - to describe three different mindsets around assurance, only one of which is useful. The big difference with high-integrity assurance is that it's done with a shared goal of safer outcomes: trust exists between engineers and compliance teams, and they work with the same documents and the same tools. I've only ever seen it in organisations with strongly aligned leadership and teams who actually understand each other's domains enough to collaborate effectively.
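To make the "same documents, same tools" point concrete, here's a minimal sketch (my own illustration, not anything prescribed by ISO 42001 or taken from the article) of a fairness check that runs in an engineering team's test suite and writes the evidence record an assurance team can read, so both groups work from the same artefact. The metric, threshold, file path and function names are all hypothetical.

```python
# A minimal sketch (illustrative only, not prescribed by ISO 42001) of a shared
# artefact: a fairness check that runs in the engineering team's test suite and
# writes the evidence record the assurance team reads.
import json
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_DIR = Path("evidence")   # hypothetical evidence store read by both teams
PARITY_GAP_LIMIT = 0.10           # hypothetical internal threshold, not a standard value


def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {}
    for group in set(groups):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())


def test_bias_gate_writes_evidence():
    # In a real pipeline these would come from a held-out evaluation run.
    predictions = [1, 0, 1, 0, 0, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    gap = demographic_parity_gap(predictions, groups)

    # Record the evidence whether the gate passes or fails, so the audit trail
    # reflects what engineering actually measured rather than a separate narrative.
    EVIDENCE_DIR.mkdir(exist_ok=True)
    record = {
        "check": "demographic_parity_gap",
        "value": gap,
        "limit": PARITY_GAP_LIMIT,
        "passed": gap <= PARITY_GAP_LIMIT,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    (EVIDENCE_DIR / "bias_check.json").write_text(json.dumps(record, indent=2))

    assert gap <= PARITY_GAP_LIMIT, f"parity gap {gap:.2f} exceeds limit {PARITY_GAP_LIMIT}"
```

The design choice that matters here is that the evidence is a by-product of the engineering workflow rather than a separate document produced for an audit.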
I'm writing a little more on this now and will publish another article shortly that explores your point. (I also wrote a bit about it on LinkedIn before: https://www.linkedin.com/pulse/shift-left-mindset-ai-safety-james-kavanagh-ony5c/)
Thanks for reading and your feedback. I'm very much enjoying the writing and hearing the perspectives of others. I really appreciate it.