What's the cost of building high-integrity AI governance?
Last time we looked at benefits - now we quantify the investments needed to build and sustain a high-integrity AI Management System. (Part 2 of a 3-part series on the business case for AI governance)
In my previous article[1], I laid out the compelling business case for high-assurance AI governance — from accelerated development cycles and enhanced reliability to measurable competitive advantage. But understanding the potential value is only half the equation. The crucial question that I expect any business leader would ask next is about investment: "What will this actually cost us?"
The answer depends heavily on context. Just as you wouldn't expect a startup developing its first AI application to implement the same security measures as a global bank managing millions of financial transactions, governance investments have to scale appropriately to both an organisation's capacity and its risk profile. The core principles remain constant, but the implementation varies dramatically based on what you're building and the potential consequences of failure.
In this article, I'll break down the specific investments required for effective AI governance, showing how to scale them appropriately across different organisational contexts. Whether you're a startup watching every dollar or a global enterprise managing hundreds of AI-powered systems, I hope you'll find practical guidance on building governance systems that protect your stakeholders without overwhelming your resources. There's not a lot of published data or research in this area, so I'll give you estimates based on my own practical experience, and I'd welcome your feedback on the data you've observed.
Matching governance investment to impact
In my mind, the governance intensity needed depends on two key factors: (1) the scope of your AI operation; and (2) the potential impact of your systems. A small company developing AI for medical diagnostics may need far more robust governance than a larger organisation building low-risk productivity tools. That reality shapes how you can estimate the likely investments needed in governance.
For simplicity, I'm going to use three archetypal scenarios. A small AI provider with fewer than 20 people might be developing sophisticated algorithms for financial trading. Despite their size, the impact of their systems demands rigorous oversight. They'll need focused but comprehensive governance that emphasises their specific use case while remaining manageable for a lean team. Governance can't get in the way; it has to be streamlined, and it has to accelerate their innovation.
A medium-sized organisation of 20-200 people might be developing multiple AI products across different domains. They face the challenge of implementing consistent governance across diverse teams while still maintaining the agility to innovate. Their governance needs to scale horizontally across projects while remaining efficient.
Large enterprises with over 200 people working on AI manage complex ecosystems of interdependent systems. Their governance mechanisms have to handle this complexity while ensuring consistency across global teams and meeting regulatory requirements for products launched in multiple markets. They need quite sophisticated systems that can oversee hundreds of models, but they also have the advantage of corporate infrastructure and expertise on hand.
So let's start with the investments needed to set up your AI Management System in the first place.
Starting with the human investments
The most crucial investment in AI governance lies in assembling the right team. While technology enables governance and frameworks guide it, people drive the actual work of ensuring AI systems remain trustworthy and safe. The human element scales with organisation size, and automation multiplies the impact of these people, but a small, skilled team will always remain the cornerstone of effective governance.
In small organisations, expertise needs to be concentrated. Rather than having dedicated governance roles, these organisations often develop governance capabilities within their existing product development teams. A technical leader might spend mornings reviewing model performance and afternoons guiding development decisions, while engineers receive focused training in validation and risk assessment. External experts can provide specialised reviews around regulatory compliance, ethics and safety.
When budgeting for the human element of AI governance, organisations need to acknowledge both direct and indirect costs. For small organisations, the investment often represents something like 5% of technical staff time allocated to governance activities (possibly up to 10% if the use case is higher risk) plus periodic external consultation. The key is recognising that while these resources aren't dedicated solely to governance, they represent real capacity that must be planned for.
As organisations grow to medium size, specialised roles become essential. A dedicated AI Governance Lead bridges technical and compliance requirements, supported by specialists in risk assessment and validation. These organisations typically establish part-time safety committees drawing from existing staff, supplemented as necessary by external consultants for specialised expertise. This first AI governance lead needs to be very hands-on and multi-disciplinary - having the confidence and ability to work with engineers, scientists, lawyers and policymakers.
Medium-sized organisations typically need to allocate 1-2 full-time equivalent positions to governance activities, though these might be spread across more individuals. On top of this, they should also budget for regular external expertise and training programs, which typically consume 2-3% of their overall AI development budget.
Large enterprises require sophisticated governance teams that can handle complexity while maintaining consistency. Multiple governance leads oversee different lines of business, domains or regions, supported by dedicated specialists in validation, risk assessment, and documentation. They maintain permanent safety committees with external members and often have specialised training staff to ensure governance knowledge spreads effectively throughout the organisation.
Large enterprises often find that effective governance requires a dedicated team representing up to 5% of their AI workforce, plus substantial investment in training and external expertise. While these numbers might seem significant, they reflect the complexity of managing AI systems at scale and the critical importance of getting governance right. Some companies like Anthropic even make safety and governance core to the work of every employee[2].
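To make those percentages tangible, here's a minimal back-of-the-envelope sketch in Python. The headcount thresholds and staffing fractions come from the rough figures above; the salary input, the high_risk flag and the function itself are illustrative assumptions, not a formula to defend line by line in a budget review.

```python
# A minimal back-of-the-envelope model for the human side of governance.
# All figures are illustrative assumptions based on the rough percentages
# discussed above -- substitute your own headcounts, salaries and risk profile.

def governance_staffing_cost(ai_headcount: int, avg_salary: float,
                             high_risk: bool = False) -> float:
    """Estimate the annual cost of governance staffing for an AI team."""
    if ai_headcount < 20:
        # Small org: ~5% of technical staff time, up to ~10% if higher risk.
        fraction = 0.10 if high_risk else 0.05
        return ai_headcount * avg_salary * fraction
    elif ai_headcount <= 200:
        # Medium org: 1-2 dedicated FTEs, possibly spread across individuals.
        ftes = 2 if high_risk else 1
        return ftes * avg_salary
    else:
        # Large enterprise: dedicated team of up to ~5% of the AI workforce
        # (the 3%/5% split below is an assumption, not a benchmark).
        fraction = 0.05 if high_risk else 0.03
        return ai_headcount * avg_salary * fraction

# Example: a 15-person team on a $150k average salary, high-risk use case.
print(f"${governance_staffing_cost(15, 150_000, high_risk=True):,.0f} per year")
# -> $225,000 per year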
Technology infrastructure to scale
An AI governance system needs foundational technology and automation to operate effectively. This infrastructure enables consistent oversight, efficient operations, and the ability to detect and respond to issues before they become critical problems. People need tools to be most effective, although not necessarily complex tools.
Small organisations have to focus on essential capabilities that integrate seamlessly into development workflows. Rather than investing in expensive enterprise platforms, they typically build their governance infrastructure around open-source solutions and, if necessary, a minimal number of commercial tools. The key is selecting technology that provides the oversight you genuinely need without creating unnecessary complexity. A small team might use basic monitoring tools integrated with their development environment, version control systems for model and data tracking, and simple but effective documentation platforms.
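To illustrate the "simple but effective" end of that spectrum, here's a sketch of a lightweight model record written alongside each trained artefact and committed to version control. The field names, file naming convention and function are assumptions for illustration - adapt them to whatever your risk assessments actually require.

```python
# A sketch of lightweight model/data tracking: record the provenance and
# sign-off for each model artefact in a JSON file that lives next to it
# in version control. Fields shown are illustrative assumptions.
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def write_model_record(model_path: str, dataset_version: str,
                       metrics: dict, approved_by: str) -> Path:
    """Write a governance record for a model artefact and return its path."""
    artefact = Path(model_path)
    record = {
        "model_file": artefact.name,
        "sha256": hashlib.sha256(artefact.read_bytes()).hexdigest(),
        "dataset_version": dataset_version,
        "validation_metrics": metrics,
        "approved_by": approved_by,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    record_path = artefact.with_suffix(".governance.json")
    record_path.write_text(json.dumps(record, indent=2))
    return record_path

# Example: record a validated model before release (hypothetical paths/metrics).
# write_model_record("models/credit_v3.pkl", "data-2024-11",
#                    {"auc": 0.91, "max_group_fpr_gap": 0.03}, "jane.doe")
```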
For small organisations, technology costs typically represent only a very small portion of their AI development budget, perhaps 1-2%. They might leverage open source, cloud platforms and inexpensive SaaS solutions to keep the technology costs low. The focus should be on tools that directly support critical governance activities rather than comprehensive platforms that would be problematic overkill for this stage. There are multiple software-as-a-service platforms that can help with bootstrapping the program – just don’t fall into a trap of thinking that culture change happens with the final click of an installation wizard.
Medium-sized organisations face the challenge of managing multiple AI systems while maintaining consistent oversight. At this scale, commercial monitoring platforms become more advisable, as the complexity of tracking multiple models and datasets exceeds what simple tools can handle effectively. These organisations typically need dedicated testing environments, more sophisticated documentation systems, and integrated platforms that can track governance, risk and compliance activities across teams. They may begin to invest in specialised tools for bias detection, drift monitoring, and performance analytics.
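To give a flavour of what drift monitoring involves before committing to a commercial platform, here's a sketch of the Population Stability Index (PSI), a common drift measure, computed over a model input or score. The 0.1/0.25 thresholds in the comments are widely used rules of thumb, not requirements of any standard.

```python
# A sketch of a basic drift check: Population Stability Index (PSI) between
# a reference (training) sample and live data. Thresholds are rules of thumb.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference sample and live data for one feature/score."""
    # Bin edges come from the reference distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Keep live data inside the reference range so every point is counted.
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Guard against empty bins before taking logs.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.3, 1.0, 10_000)   # simulate a shift in inputs
print(f"PSI = {population_stability_index(train_scores, live_scores):.3f}")
# Rules of thumb: < 0.1 stable, 0.1-0.25 watch closely, > 0.25 investigate.
```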
Medium-sized organisations usually find technology investments for AI governance consuming a more substantial portion of their AI budget, perhaps 2-3%. This includes commercial platforms for governance, monitoring and validation, dedicated testing environments, and possibly even some specialised tools for risk assessment and tracking. Organisations at this scale often need to balance the benefits of integrated platforms against the flexibility of point solutions, but they're unlikely to have spare capacity to fully build and sustain their own tooling platforms for governance.
Large enterprises require comprehensive technology stacks that can handle hundreds or thousands of AI models simultaneously. Their infrastructure needs to support complex workflows across multiple teams while maintaining consistent governance standards. They may invest in enterprise-wide platforms that integrate monitoring, documentation, and workflow management or more likely build their own tools tailored to their specific needs. They generally require more sophisticated security controls, automated testing environments, and advanced analytics capabilities to track governance metrics across their AI portfolio. They may also have the capacity to make use of sophisticated open-source resources like the Secure Controls Framework[3].
Large enterprises probably allocate a more substantial 2-5% of their AI budget to governance technology, reflecting the complexity of managing AI at scale. This includes enterprise-wide platforms, advanced testing environments, and the development of tools fit for their specific purpose. While significant, this investment can lead to efficiency gains that help justify the cost.
Governance Framework: The Operating System for your Safety Culture
Think of governance frameworks as the operating system that coordinates all these moving parts. They define how people use technology to achieve effective oversight, ensuring consistency and reliability across the organisation. While frameworks might seem like just documents, they represent the codified wisdom and processes that make governance work in practice. ISO 42001 is not a governance framework - it's a standard that describes some parts of the overall governance framework, but I'll come back to that distinction in another article. You will build your own governance framework, adapted to your context, scale, location, and culture. This requires an upfront investment of resources, possibly bringing in specialist contractors or engaging an external firm, so I factor it into a cost model separate from the ongoing human resources for governance.
Small organisations need a focused framework that concentrates on their specific use cases. Rather than trying to cover every possibility, they should develop clear processes for their most critical activities - model validation, risk assessment, and incident response. These frameworks should be lightweight but thorough, designed to scale as the organisation grows. The key is creating clear guidelines that everyone can follow without getting lost in unnecessary complexity. Some software tools come with pre-built frameworks which can be a useful starting point – but be careful not to overwhelm, and make sure the framework is trimmed and tailored to your needs. A spreadsheet everyone understands is far more valuable than a beautiful app that nobody uses.
For these small organisations, the primary cost of a framework comes in time rather than direct expenses. Typically, organisations should expect to invest a month of senior technical time in developing the initial framework, with ongoing maintenance requiring about 5% of capacity. External consultation during framework development is an additional cost - and it will still require deep engagement from the team.
Medium-sized organisations need a framework that can handle a diversity of business functions and systems while maintaining consistency. Their framework needs to establish standard, pragmatic practices that ensure quality without stifling innovation. This often involves developing detailed documentation for different types of AI systems and use cases, creating clear decision-making processes, and establishing regular review cycles to keep frameworks current. They may find that they need to dedicate 1 full-time equivalent to framework development for the first 3 months, dropping to 0.5-1 FTE for ongoing operations. They should also budget for regular external reviews and updates, which will depend on the market, risks and regulatory compliance requirements they face.
Large enterprises require sophisticated frameworks that can handle complex interactions between multiple teams and systems. Their frameworks need to integrate with existing enterprise processes while addressing AI-specific requirements. This often involves creating specialised procedures for different types of AI systems, establishing clear governance hierarchies, and maintaining consistency across global operations. Their framework development and maintenance needs a dedicated team of 2-3 people for up to 6 months, plus significant input from across the organisation.
The key insight across all these investments is that effective governance isn't about spending the most money - it's about making targeted investments that match your organisation's needs and risk profile. Learning from the experiences of others, sharing with peers and taking advantage of available resources and templates can really cut down on the effort.
The cost of getting ISO 42001 certified
While building your AI Management System requires significant internal investment in infrastructure, training, and expertise, achieving certification demands its own distinct financial commitment. I do believe it is a worthwhile step for most organisations though, considering both the additional validation it provides and the confidence it can give to customers and stakeholders. The certification journey unfolds in two phases: preparation and validation. Each phase brings its own set of costs as your organisation demonstrates the effectiveness of your governance framework to independent assessors.
Phase One: Proving Your Readiness
Before inviting external auditors to the main event, you'll need a thorough dress rehearsal. This preparation phase hinges on two critical elements: a comprehensive gap assessment and a robust evidence portfolio.
The scope and scale of this preparation of course vary with organisational size. For a small startup, the journey might begin with a senior technical lead spending 2 to 4 focused weeks examining your governance through the lens of ISO 42001. They'll need to formalise what may have been informal practices, creating clear records of model validation, risk assessments, and key decisions. It's reasonable to think this would demand up to a month of dedicated effort, with the technical team devoting perhaps up to a quarter of their time to documentation and process refinement, depending on the rigour of their existing approach to documentation.
Mid-sized organisations face a different challenge. Here, the preparation often requires assembling a dedicated readiness team—typically a couple of governance specialists partnered with technical leads who understand different aspects of the AI systems. This team might spend up to two months conducting their review, working closely with stakeholders across departments to map out how governance functions in practice. Two or three full-time staff lead the charge while the broader AI team contributes possibly up to 10% of their capacity to support documentation efforts.
For large enterprises, the scale shifts again. The complexity of managing AI operations across different teams and regions demands a more substantial approach. A dedicated project team, supported by representatives from various departments, might spend two to four months ensuring consistent governance practices across the organisation. A fair amount of that time might need to go into leadership briefings, developing new cross-organisational policies and mechanisms.
Regardless of size, the goal remains the same: building a comprehensive evidence base that demonstrates how your organisation meets ISO 42001's requirements. This isn't merely about creating documents though—it's about capturing tangible proof of governance in action and resolving gaps or deficiencies that you know about. Your evidence needs to tell the story of how your organisation approaches AI development, deployment, and management with rigour and responsibility.
Phase Two: The Certification Journey
Even organisations with sophisticated AI governance frameworks benefit from external scrutiny. The formal certification assessment often reveals nuanced gaps that internal teams might overlook, as certification bodies bring both fresh perspective and expertise from evaluating AI management systems across different contexts and industries. The systematic nature of their review process helps identify subtle improvements that can strengthen your governance framework, making sure it meets the comprehensive requirements of ISO 42001.
Your first critical decision is selecting an accredited certification body. You need assessors who understand not just general management systems, but the nuanced complexities of AI governance. Small organisations might spend a week finding the right partner, while large enterprises often invest months in formal procurement processes, particularly when negotiating multi-site agreements. It is worth finding the right one - don’t be tempted to go with the cheapest or the ‘easiest’ (although also don’t be impressed by the glossy expense of a huge consulting and audit firm). Be warned that there is a huge variety in the quality of resources available and an enormous gap between what some certification bodies claim about their expertise and their real capability.
They should be accredited or at least able to describe a plan for accreditation that you have confidence in, but don’t take accreditation as an automatic seal of quality. If in doubt - spend an hour talking about your AI system with them in depth - you’ll find out their true level of expertise in the first 5 minutes.
The assessment itself unfolds in two stages. Stage 1 is like having an architect review your blueprints—the auditor examines your policies, procedures, and documentation to ensure they align with ISO 42001's requirements. Stage 2 delves deeper, observing how your governance actually works in practice. They're looking for evidence that your governance isn't just well-designed on paper, but functions effectively in the real world.
A small startup might complete both stages in three or four focused days, with their technical lead guiding auditors through their governance practices. Mid-sized organisations typically need up to two weeks, coordinating across multiple teams to demonstrate consistent practices. Large enterprises face the most complex challenge, often requiring several weeks to showcase governance across diverse AI applications and multiple locations, and sourcing evidence or expertise to answer questions that arise. Be aware that there’s usually an interval between Stage 1 and Stage 2 to remediate gaps or gather more evidence. You’ll need to plan for a few weeks.
Findings are inevitable from both stages—even the most prepared organisations discover areas for improvement. An audit that surfaces no findings at all is a red flag - a burning, bright red flag! Findings can range from simple documentation updates to more substantial process refinements.
They will fall into distinct categories of severity. At the most benign level are Opportunities for Improvement (OFIs) - these are suggestions where the auditor sees potential to enhance your governance system, though you're already meeting the standard's requirements. Next are Minor Nonconformities, which indicate gaps in your system that need addressing but don't represent fundamental failures - perhaps inconsistent documentation or isolated instances where procedures weren't followed. Major Nonconformities are more serious, pointing to systemic issues that could compromise your governance effectiveness - like missing mandatory processes, widespread failures to follow procedures, or critical gaps in risk assessment. These have to be resolved before certification can be granted. The most severe category is Critical Nonconformities, which indicate a fundamental failure that poses immediate risks - such as deploying high-risk AI systems without any governance controls or deliberately circumventing safeguards. These definitely prevent certification, and if they’re found during a surveillance audit, could trigger a suspension of your certification.
Small organisations might receive maybe five to ten findings, typically requiring a few weeks to address. Mid-sized organisations often face ten or more findings, demanding a month of coordinated effort across teams. Large enterprises might need to address a much longer list of findings, potentially investing a few months to implement changes systematically across their operations. The total journey varies accordingly. Small organisations might achieve certification in one month from their initial engagement with auditors. Mid-sized organisations typically need one to two months, while large enterprises will likely require at least three months, particularly when coordinating across multiple sites. Much of this is of course scope-dependent and it’s possible to manage the scope of the audit independently of the scope of your governance program in order to manage costs and timelines.
But here's the crucial point that I will keep returning to: certification isn't merely about passing an audit—it's about proving to yourself that your governance system truly works and is fit-for-purpose. The investment in getting this right yields lasting benefits: more robust operations, reduced risks, and enhanced stakeholder confidence. Findings are useful in an organisation that is willing to learn. Each finding addressed strengthens your governance framework, creating value that extends far beyond the certificate on your wall, or the logo on your website.
Sustaining your AI management system
Reaching ISO 42001 certification is a remarkable achievement - worth celebrating - but it is only part of the journey. In the words of Jed Bartlet, the greatest American president that never was: "What's next?" The challenge now is maintaining this position of good governance. So, in our business case, we need to reflect the ongoing cost of sustaining the governance framework, along with the resources, systems and mechanisms you've put in place. It has to continue to be adopted for new AI systems and evolve with emerging risks and the scale of your organisation. This demands attention across three crucial dimensions: vigilant monitoring, capability development, and adaptation.
For small organisations, monitoring often resembles a collective responsibility with one person on point at a time. You might assign one engineer to spend each Thursday reviewing governance metrics. They rely on automated alerts for critical issues, allowing them to focus their limited attention where it matters most - all up, typically investing about 10-15% of their technical capacity in these oversight activities. They will also invest possibly up to 5% of their time in governance training, supplemented by relationships with external experts who provide specialised guidance. You'll need to assign responsibility to that team member (or another) for tracking changes in their domain, such as new regulations, threats or compliance requirements - again, perhaps 5-10% of their time.
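Those automated alerts don't need to be elaborate. Here's a minimal sketch of the pattern: a scheduled job that compares a handful of governance metrics against agreed thresholds and only raises the alarm when attention is genuinely needed. The metric names, threshold values and the notify() stub are all illustrative assumptions.

```python
# A minimal sketch of "automated alerts for critical issues": a scheduled
# check of governance metrics against agreed thresholds. All metric names
# and limits below are illustrative, not recommendations.

THRESHOLDS = {
    "daily_error_rate": 0.02,    # max acceptable prediction error rate
    "psi_input_drift": 0.25,     # population stability index ceiling
    "days_since_review": 35,     # the weekly review shouldn't lapse
}

def notify(message: str) -> None:
    # Stand-in for whatever Slack/email/pager integration you actually use.
    print(f"[GOVERNANCE ALERT] {message}")

def check_metrics(current: dict) -> list[str]:
    """Compare the latest metrics to thresholds and report any breaches."""
    breaches = [
        f"{name} = {current[name]} exceeds limit {limit}"
        for name, limit in THRESHOLDS.items()
        if current.get(name, 0) > limit
    ]
    for breach in breaches:
        notify(breach)
    return breaches

# Example run, e.g. triggered from cron each morning:
check_metrics({"daily_error_rate": 0.01, "psi_input_drift": 0.31,
               "days_since_review": 7})
```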
In a mid-sized organisation, you might find two or three specialists dividing their attention: one tracking technical performance metrics, another monitoring risk indicators, and a third ensuring compliance standards remain strong. Each might spend 25-50% of their capacity on these tasks. They will develop more structured curricula and maintain regular knowledge-sharing sessions. They blend internal expertise with external partnerships, and invest a small amount of budget in keeping their governance skills sharp and current. Mid-sized organisations tend to approach change more systematically, maintaining governance committees that meet monthly to survey the horizon and testing new approaches through careful pilot programs.
Large enterprises operate with dedicated teams and use advanced analytics to spot patterns—identifying potential issues before they develop into storms. These organisations maintain continuous oversight, supported by quarterly assessments across their entire AI portfolio. They maintain deep partnerships with multiple external experts, investing 3-5% of their AI budget in capability development across multiple specialties. The dedicated team watches for emerging risks and opportunities across multiple domains. They operate sophisticated frameworks tailored to different AI systems, each evolving at its own pace while maintaining alignment with enterprise-wide standards. However, they have to strongly guard against becoming bureaucratic, always taking deliberate actions to stay close to the business and engineering teams. I've seen wonderful governance programs, tightly aligned to business objectives and delivering tangible value, devolve into a bureaucratic morass as the team grows but loses connection to the business.
This sustained investment in governance isn't merely an insurance policy—it's the foundation that enables confident innovation. Think of it as building institutional muscle memory for responsible AI development. With each challenge faced and overcome, your governance system grows stronger and more resilient. The key lies in finding the right balance for your organisation's context. Rather than pursuing perfect governance, focus on building systems that effectively manage your specific risks while enabling your AI ambitions. When done right, good governance feels less like a restraint and more like an enabler, helping you cross challenging terrain with confidence and speed.
By maintaining this perspective and making thoughtful investments in monitoring, capability development, and adaptation, organisations can build governance systems that don't just endure but actually improve over time.
Now while understanding both the benefits and the costs is essential, transforming these into organisational action requires more than just solid numbers—it demands compelling storytelling and strategic conversations. Even the most rigorous cost analysis won't drive change unless it resonates with the right people at the right time. So in the last article of this series, I'll explore how to craft a narrative that speaks to diverse stakeholders, from technical teams concerned about development speed, to legal teams concerned about liability, to executives focused on market positioning.
I’ll go through how I approach building momentum for governance initiatives by weaving together tangible costs with strategic value, using pilot projects to demonstrate impact, and adapting the message to address specific stakeholder concerns so that the aspiration for good governance turns into a sustainable reality.
Please do subscribe if you’d like to be notified when the next article is out, or read any of my other articles on implementing an AI management system for real.
[2] https://www.forbes.com/sites/timabansal/2024/01/16/openai-or-anthropic-which-will-keep-you-more-safe/
[3] https://securecontrolsframework.com/