Why are so few companies ISO42001 certified?
ISO42001 is the key standard for Responsible AI Management, but only 16 AI companies in the world have so far been certified. I led one of those programs, and learned first-hand why it's so hard.
The ISO42001 standard [1], a cornerstone of Responsible AI Management, is now over a year old. It was published in December 2023, after some two years of development involving hundreds of experts around the world. Yet, of the 70,000+ companies worldwide offering AI solutions [2], I can find only 16 that have achieved certification, most recently Anthropic [3].
If barely 0.02% of AI companies are presently able to independently demonstrate the suitability of their Responsible AI practices, then we clearly have some real work to do. I led one of those programs, achieving accredited ISO42001 certification for Amazon Web Services in November 2024, so I understand first-hand why it is so difficult.
I believe certifications like ISO42001 are important, but not because of the paper certificate or published badge of assurance they offer. I believe in them because of the questions they ask of leaders, the mechanisms they force to be built, and the resources and responsibilities they demand to be assigned. The final certification is just a milestone; the process, if performed with integrity and diligence, is where the real value lies. But it's not easy. I want to dig into why we are seeing such slow adoption of ISO42001, as measured by published certifications, and share some of my experience and hard-won lessons to help accelerate the journey for the many companies no doubt working towards certification right now.
The Current State of ISO42001 Certification
So my desk research uncovered the following companies who have achieved ISO42001 certification (do let me know if you know of others):
2 large cloud providers: AWS and Google
1 AI foundation model provider: Anthropic
4 technology integrators: KPMG, Infosys, Cognizant, Samsung
9 assorted AI-powered SaaS solutions: 6 Clicks, Meltwater, Integral Ad Science, Datamatics, Evisort, AI Clearing, ORO Labs, Spektr and Thoropass.
Since publishing, I've been alerted to a few more organisations that have achieved ISO42001 certification:
Kandji, Ema Unlimited, Vanta, StackAware and Mimecast
Thanks to all who advised of these additions.
Some 14 months after ISO42001 was released, I am curious why there are so few, and what we can do to accelerate adoption of this important standard. In my own experience, the journey to ISO42001 certification revealed a collision of worlds: leading-edge, fast-paced AI innovation meeting traditional corporate governance, risk and compliance. My career has been about navigating those complexities, and I wrote previously about those challenges and my observation of the need for a new profession of AI Assurance, one that integrates aspects of science, engineering, assurance, policy and law.
But I suspect the story of why only sixteen companies have achieved certification to date, and why there are some very notable absences, goes deeper than technical hurdles or skills gaps. From my own experience, here are what I consider to be the top issues:
Obtaining and communicating leadership agreement
One of the most striking challenges lies in getting leadership alignment. It is not that leaders are in any way averse to the need for Responsible AI, but getting senior leaders to agree on Responsible AI policies is like asking a group of ship captains to agree on the perfect weather for sailing – each sees the horizon from their own perspective. There are some fundamental tensions to resolve: executive leadership strives to balance innovation speed, risk management and competitive advantage; science and engineering leaders are stretched to focus on releases, features and breakthroughs; compliance leaders are concerned with risk mitigation and compliance evidence; and senior legal folk are trying to navigate evolving regulations and minimise liability. Achieving consensus on Responsible AI policies across these leaders is really challenging, often in organisations where AI expertise and governance experience rarely overlap. We often need to conduct extensive cross-functional workshops to develop a shared vocabulary and reconcile the differing perspectives.
A central artifact of ISO42001 is a documented Responsible AI Policy that sets out objectives and responsibilities for the organisation. This is where tough questions need to be asked and answered by senior leaders, and the answers need to be documented and communicated across the organisation. Writing a one-page policy is easy; getting everyone to agree to it, and then communicating it out, is hard.
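To make that concrete, here is a minimal sketch in Python of the kind of elements such a policy has to pin down, and the gaps leadership has to close before it is real. The field names and checks are my own illustration, not terminology drawn from the text of the standard.

```python
from dataclasses import dataclass, field

@dataclass
class ResponsibleAIPolicy:
    """A minimal, illustrative model of what a Responsible AI policy must pin down."""
    purpose: str                    # why the organisation develops or uses AI
    objectives: list[str]           # measurable Responsible AI objectives
    accountable_owner: str          # the named senior owner of the policy
    commitments: list[str]          # e.g. regulatory compliance, continual improvement
    communicated_to: list[str] = field(default_factory=list)  # audiences reached so far

    def gaps(self) -> list[str]:
        """List the questions leadership still has to answer."""
        missing = []
        if not self.objectives:
            missing.append("no measurable objectives agreed")
        if not self.accountable_owner:
            missing.append("no accountable senior owner named")
        if not self.commitments:
            missing.append("no explicit commitments recorded")
        if not self.communicated_to:
            missing.append("policy not yet communicated to anyone")
        return missing

policy = ResponsibleAIPolicy(
    purpose="Deploy AI assistants in customer support",
    objectives=["All high-risk systems impact-assessed before launch"],
    accountable_owner="",  # the hard part: whose name goes here?
    commitments=["Comply with applicable AI regulation"],
)
print(policy.gaps())
# ['no accountable senior owner named', 'policy not yet communicated to anyone']
```

Notice that the hardest fields to fill are not the technical ones: it is naming the accountable owner and actually communicating the policy that stall most drafts.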
What could possibly go wrong?
Then comes the sobering task of Risk Assessments and System Impact Assessments. Organisations that have spent years celebrating their AI innovations and pushing relentlessly for new features must suddenly take a critical lens and ask: "What could go wrong?" It's like asking proud parents to list all the ways their child might misbehave – necessary but uncomfortable. Engineering teams accustomed to showcasing AI capabilities now have to systematically imagine potential misuse scenarios, unintended consequences and failure modes. Getting the balance right between being a cheerleader for innovation and a critical examiner requires a cultural transformation that many organisations find challenging. And be prepared for some interesting conversations with your legal team when you try to record the potential catastrophic risks of your new AI system!
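For illustration, here is a sketch of one row of a simple AI risk register with a basic likelihood-times-severity score. ISO42001 does not mandate any particular scoring scheme; the levels, threshold and field names here are assumptions of mine, not the standard's.

```python
from dataclasses import dataclass
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskEntry:
    """One row of a simple AI risk register (illustrative only)."""
    system: str
    scenario: str        # the uncomfortable answer to "what could go wrong?"
    likelihood: Level
    severity: Level
    mitigation: str = ""

    @property
    def score(self) -> int:
        # Classic likelihood x severity scoring; many other schemes exist.
        return int(self.likelihood) * int(self.severity)

    def needs_escalation(self, threshold: int = 6) -> bool:
        """High scores go to leadership, not just the engineering backlog."""
        return self.score >= threshold

register = [
    RiskEntry("support-chatbot", "Hallucinated refund policy quoted to a customer",
              Level.MEDIUM, Level.HIGH,
              mitigation="Grounded retrieval plus human review of policy answers"),
    RiskEntry("support-chatbot", "Prompt injection exfiltrates account data",
              Level.LOW, Level.HIGH),
]

for risk in register:
    flag = "ESCALATE" if risk.needs_escalation() else "track"
    print(f"[{flag}] {risk.scenario} (score {risk.score})")
```

The mechanics are trivial; the cultural work is getting teams to write honest entries in the scenario column in the first place.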
An engineer, a scientist, a lawyer and an auditor walk into a bar ...
There is a huge gap in expertise, knowledge and even language between the professions that need to work together on Responsible AI. Scientists can explain new research findings in mechanistic interpretability but struggle with compliance frameworks, while governance experts can navigate regulatory mazes but find neural networks mystifying. This knowledge divide isn't just an inconvenience – it's a fundamental barrier to effective AI governance. Organisations need people who can speak both languages, but these bilingual experts are rare. You've truly found a unicorn if you can recruit someone who can confidently engage across the law, policy, audit, engineering and science dimensions of Responsible AI in today's market.
With limited resources, prioritisation is tough
Resource allocation presents another critical challenge. In the high-stakes race of AI development, asking teams to divert resources to testing and governance can feel like suggesting a pit stop in the final lap of a race. With limited budgets and intense pressure to innovate, organisations struggle to justify investing in governance frameworks that don't directly contribute to their AI capabilities. The reality is that proper testing requires substantial resources – both in terms of infrastructure and talented professionals – at a time when these resources are already stretched thin.
A shortage of skilled, accredited assurance firms
The scarcity of qualified professionals extends beyond an organisation's own walls. Finding auditors and certification bodies who understand both AI technology and governance frameworks is like searching for a chef who's a master of both molecular gastronomy and traditional cuisine. The industry simply hasn't had time to develop enough professionals who can bridge this gap, creating bottlenecks in the certification process. Perhaps sensing the lucrative opportunity, audit firms are busily retraining and developing people, but that takes time and experience. A professional training course on ISO42001 audit takes less than a week; actually having enough expertise and knowledge to perform a useful audit takes much, much longer. The audit firm that performed your latest SOC2 or ISO27001 audit probably does not have the skills to effectively audit for ISO42001 (although they may say they can).
One contributing factor has been the extended delays in the publication of a related standard, ISO42006, which sets requirements for the bodies that certify against ISO42001 and remains in a final approval stage [4]. This delay has certainly been problematic, but a number of accreditation bodies, including ANAB [5] (the main one for the US) and others internationally, have established interim accreditation schemes to remove the blocker. There are firms who can perform accredited certification and have the skills to do so effectively.
It’s still early days
Looking forward, I do see positive signs. It is likely that many, many more companies are busily implementing their Responsible AI management systems and preparing for independent audit and certification. As awareness grows, certification bodies get accredited, and regulatory pressures increase, we're likely to see more companies pursuing ISO42001 certification. The pioneers who have achieved it are showing that while the journey requires some complex navigation, bridging of knowledge gaps and tough resource decisions, it is very much achievable. But as I said at the beginning of this article, it's not the final certification paper that matters; it's the high-integrity process of asking tough questions and making hard decisions.
I believe the current scarcity of certified companies is a reflection of the profound challenges organisations face in reconciling rapid technological advancement with responsible governance. Yet as AI becomes more deeply embedded in our world, establishing robust governance frameworks isn't just good practice – it's becoming a business imperative. The question isn't whether more companies will pursue certification, but how quickly they can overcome these fundamental challenges to do so effectively.
In coming articles, I plan to go into more detail on the journey of building a Responsible AI Management System that is fit for your needs and conforms to the ISO42001 standard, along with practical guidance, templates and resources. I hope you will find the articles useful, and I would very much value your feedback and ideas.
[1] https://www.iso.org/standard/81230.html
[2] https://appquipo.com/blog/how-many-ai-companies-are-there/
[3] https://www.anthropic.com/news/anthropic-achieves-iso-42001-certification-for-responsible-ai
[4] https://www.iso.org/standard/44546.html
[5] https://anab.ansi.org/accreditation/iso-iec-42001-artificial-intelligence-management-systems/