How to secure leadership support for high-integrity AI governance
Beyond benefits and costs, securing leadership support and investment is about building momentum through compelling storytelling.
In the previous two articles, I explored how a well-designed AI management system delivers value far beyond simple regulatory compliance. From protecting against costly failures like those seen at GM Cruise, to enabling faster and more confident innovation by establishing clear guardrails, the business case combines tangible risk reduction with strategic advantage. I also shared my perspective on how early investment in governance frameworks consistently costs less than retrofitting them after incidents occur, while building the trust necessary to accelerate AI adoption across an organisation.
Yet even with compelling economics and clear strategic benefits, securing genuine leadership commitment for AI governance remains challenging. The reality is that implementing an ISO 42001-compliant management system (alongside governance for security, privacy and other domains) requires significant organisational change - from how teams develop AI systems to how they think about risk and responsibility. This kind of transformation demands more than a well-documented business case. You have to build deep understanding and lasting support across diverse stakeholders who, let’s face it, may already see governance as more of a constraint than an enabler.
In this article I want to explore how to build that crucial leadership support - how to go beyond financial metrics to create genuine organisational momentum for responsible AI governance. I’ve developed and delivered many business cases for technology and governance investments, and I’ve never found that even a perfect document with a slam-dunk financial case is enough on its own. So, I want to share some approaches from my own experience that have worked, walking through ways to engage different stakeholders, overcome their concerns, and demonstrate how systematic governance accelerates rather than inhibits innovation. Most importantly, I want to help you shift the narrative from governance as a necessary burden to governance as a strategic capability that transforms how organisations develop and deploy AI.
With funding, you can deliver checkbox compliance, but with sustained, motivated leadership support, you can deliver high-integrity assurance.
Four pillars: Strategy, Culture, Innovation and Risk
The strategic justification for AI governance rests on four key pillars, and every business leader will understand your message if you frame the case in terms of one or more of them:
Strategic Positioning. Organisations that build robust governance early gain the ability to move faster as AI opportunities and regulations evolve. Rather than retroactively addressing governance gaps, they can rapidly demonstrate trustworthiness to stakeholders and regulators. AI providers in sensitive sectors like healthcare and finance that implement systematic risk assessment and model validation processes can enter new markets in weeks rather than months, because they can quickly show customers and regulators their controls and their understanding of the risks. Communicated in this way, forward-looking business leaders get why governance should be systematic from the start - documenting your processes, establishing clear risk frameworks, and creating repeatable validation procedures all build trust with customers.
Cultural Transformation. Effective AI governance changes how teams approach development and deployment. In organisations with mature governance, I've seen engineers naturally incorporate security, resilience and responsibility considerations into technical specifications, while product managers proactively address governance requirements in roadmaps rather than treating them as compliance checkboxes. This shift doesn’t happen by accident - it requires deliberate effort by leaders - but if you establish regular cross-functional reviews where technical and governance teams collaborate, create clear processes for evaluating ethical implications, and reward teams that demonstrate thorough consideration of governance in their planning, magic happens. The easiest time to have governance discussions is early in the development cycle, when changes are easier to implement and rework can be avoided.
Accelerated Innovation. I get why some leaders think governance, risk and compliance can be a handbrake on innovation. They’ve experienced the bureaucracy of checkbox compliance and inane regulations - but that’s because their experience has been one of governance done poorly. I strongly believe that well-designed governance makes innovation faster by establishing clear boundaries for experimentation. We have to pause and acknowledge that engineering teams do worry that governance will slow their AI initiatives - but they come to appreciate how clear processes for evaluating and managing risks mean their team spends less time debating what is acceptable and more time innovating within established parameters. The key is designing governance frameworks that provide explicit guidance while maintaining flexibility for novel approaches. Document your risk appetite clearly, create efficient processes for evaluating new use cases, and ensure governance teams are equipped to provide rapid feedback on innovative proposals. Business leaders want faster innovation, but they don’t want chaos or wasted effort - good governance helps.
Risk Management. The cost of delayed governance grows rapidly as AI systems become more complex and interconnected. When a company discovers bias in its recommendation engine months after deployment, the cost of remediation - including system redesign, retraining, and rebuilding customer trust - far exceeds what proactive monitoring would have required. Establish comprehensive monitoring from the start, covering not just security and technical performance but also fairness, transparency, and other ethical considerations (the sketch below gives a flavour of what such a check can look like). Create clear incident response procedures and regularly test them through simulated scenarios. Most importantly, ensure your governance system can identify emerging risks before they manifest in production. New regulations emerge rapidly, and it’s best to be prepared with strong governance foundations rather than react to each and every new rule. Business leaders never welcome unpleasant surprises that could have been foreseen.
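To make “monitoring for fairness” concrete, here is a minimal sketch of the kind of automated check a team might schedule against recent production decisions. It applies a simple disparate-impact ratio (the “four-fifths rule”); the data shape, function names and the 0.8 threshold are my illustrative assumptions, not a prescribed implementation or a requirement of any standard.

```python
# A minimal, illustrative fairness check - NOT a prescribed implementation.
# Assumed inputs: a table of recent production decisions with a demographic
# "group" column and a binary "positive_outcome" column.

import pandas as pd

def selection_rate(df: pd.DataFrame, group: str) -> float:
    """Share of positive outcomes for one demographic group."""
    return df.loc[df["group"] == group, "positive_outcome"].mean()

def fairness_alerts(df: pd.DataFrame, reference_group: str, threshold: float = 0.8) -> list:
    """Flag groups whose selection rate falls below `threshold` times the
    reference group's rate (a simple disparate-impact style check)."""
    ref_rate = selection_rate(df, reference_group)
    alerts = []
    for group in df["group"].unique():
        if group == reference_group:
            continue
        ratio = selection_rate(df, group) / ref_rate
        if ratio < threshold:
            alerts.append(f"{group}: selection-rate ratio {ratio:.2f} is below {threshold}")
    return alerts

# Example: run this on a schedule against recent decisions
recent = pd.DataFrame({
    "group":            ["A", "A", "A", "B", "B", "B"],
    "positive_outcome": [  1,   1,   0,   1,   0,   0],
})
for alert in fairness_alerts(recent, reference_group="A"):
    print("FAIRNESS ALERT:", alert)  # in practice: route to your incident response process
```

The point isn’t this particular metric - it’s that a few dozen lines of scheduled checking, wired into your incident response process, cost vastly less than discovering bias from a customer complaint months after deployment.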
This is all about forcefully rejecting the notion of governance as a compliance exercise. I try hard never to use that word with leaders (instead, I talk about assurance). Good AI governance is a strategic capability that enables confident innovation. So, explain how your approach will create clear ownership and accountability for each aspect of governance, and how you’ll invest in tools and training that make good governance practices accessible to all teams. Most importantly, demonstrate through consistent action that governance considerations are central to the organisation’s AI strategy, not peripheral to it.
Steps to make the case for AI governance
Let me share what I've learned about the steps to take when building support for good governance in general, and good AI governance in particular. The key, I've discovered, is that you need to tell different stories to different people - not because you're changing the truth, but because different stakeholders care deeply about different aspects of governance.
When I work with engineering leaders, I focus on how governance accelerates rather than constrains their teams. I'll start by asking about their current pain points - the time spent debating what's acceptable, the rework when issues are caught late, the uncertainty about whether a model is ready for production. It’s not uncommon to find that 20% of their time goes to these kinds of informal governance activities. By implementing clear frameworks and decision processes, we’ve cut that overhead in half while actually improving safety and compliance.
Business leaders need a different conversation entirely. With them, I focus on market access and competitive advantage - they are often surprised, expecting me to lead with risk and compliance, when instead I ask about profitability, market share, sales blockers, and competitors. I once worked with a team that was deeply sceptical of governance until we mapped out how competitors with more robust governance frameworks were using them to build customer trust and win market share, despite having less-featured products. The difference can be stark - without good governance resources, teams scramble to demonstrate trustworthiness to new clients, while competitors close deals in half the time because they can quickly show their systems are responsibly managed and fully assured against local requirements.
Legal, risk and compliance officers might seem like natural allies, but I've learned they need their own unique approach. You may find they can be the most conservative members of the leadership team, with established ways of working that they may not be all that keen to change. Personally, I never start with the legal, risk and compliance leaders - I look for business and engineering leadership buy-in first, then circle back to the ‘natural’ allies in legal, risk and compliance.
Let me share a practical approach: start small, with individual, informal conversations across these different groups. Listen more than you talk. Understand their specific challenges and frustrations. Then begin building your narrative around how governance specifically addresses their pain points. Don’t bring everyone together until you feel you’ve got a groundswell of support and a good understanding of every leader’s perspective.
I often recommend starting with a pilot project - something meaningful but manageable. For instance, a gap assessment on a single application or service. When performing the pilot, document everything - the effort required, the benefits gained, the lessons learned. When it comes time to propose expanding governance across all of the organisation’s AI systems, you will have concrete evidence of value that resonates with each stakeholder group.
You'll face resistance - everyone does. When people tell me they're worried about governance slowing them down, I share specific examples of how clear frameworks actually speed up development. When they raise concerns about cost, I walk them through real cases where early investment in governance saved millions in retrofit costs or crisis management. Here's what I've found most crucial: Don't treat securing initial approval as the end goal. You need to build lasting support. I always recommend creating a governance steering committee with representatives from different parts of the organisation. This keeps key stakeholders engaged and ensures diverse perspectives inform your governance evolution.
Remember, building support for governance is a journey, not a destination. Start with the stakeholders most likely to understand its value, build evidence through pilot projects, and gradually expand your coalition. I've seen this approach work, transforming governance from a compliance burden into a genuine organisational capability that enables faster, more confident AI development.
But whatever you do - don’t make these 5 mistakes!
I want to finish by sharing some common missteps that can happen when implementing AI governance - patterns that can undermine success even with strong initial commitment. These anti-patterns often emerge from good intentions but can cost you the support and investments you need to be successful in the long run.
1️⃣ The first and perhaps most dangerous pattern is treating ISO 42001 certification as the end goal rather than a milestone in building genuine governance capability. Focusing exclusively on "checking the boxes" for certification and creating elaborate documentation that bears little resemblance to how teams actually work is a terrible strategy. When faced with a real crisis, the governance framework will prove hollow and useless. Having paperwork but not the practised capability to respond effectively is a tragedy. The certification should validate your governance practices, not define them.
2️⃣ Another common failure mode is building the case for governance solely around risk mitigation. While protecting against downside scenarios is important, framing governance as purely defensive misses its transformative potential. I have seen many companies and government agencies struggle to gain traction with their governance initiatives until they reframed them around enabling faster, more confident innovation. Teams need to see governance not as a barrier but as guardrails that let them move faster within clear boundaries. Build your case on both protection and possibility.
3️⃣ Perhaps the most unfortunate pattern is deliberately underestimating the investment required, hoping to secure more resources once initial work begins - perhaps proposing a skeleton governance framework, planning to expand it later. This almost always backfires - under-resourced teams can't deliver meaningful value, which just makes securing additional investment even harder. The resulting patchwork of partial solutions proves more expensive and less effective than doing it right from the start. Be honest about what good governance requires. You can start small, but don’t promise what you can’t deliver with the available resources.
4️⃣ I've also seen organisations focus too narrowly on technical controls while neglecting the human elements of governance. Strong policies and monitoring tools matter, but they mean little without trained people who understand how to use them effectively. Investing heavily in automated monitoring systems but failing to build the organisational capability to interpret and act on the alerts is pointless. When problems start to surface, the warnings will be lost in a flood of unprocessed notifications.
5️⃣ Equally problematic is treating governance as the responsibility of a single team rather than a shared organisational capability. When an AI company creates a dedicated AI governance group but doesn't engage its development teams in governance discussions, it creates an adversarial dynamic that slows innovation rather than enabling it. Effective governance requires broad participation and ownership. Take the opposite approach - weave governance considerations into every aspect of your AI initiatives, from initial planning through ongoing operations. Invest in building real capability, not just documentation. Engage broadly across teams and levels, creating shared ownership for responsible AI development.
Consider these patterns as warning signs. If you find your organisation falling into any of them, step back and reassess your approach. The goal isn't just to earn a certification or check compliance boxes - it's to build the capability to develop and deploy AI systems responsibly at scale. That requires genuine investment, broad engagement, and a long-term commitment to building real organisational muscle around governance.
That’s the last of my three articles on how to build a business case for good governance. I hope you find the guidance useful in your own company and team, and I would welcome any feedback or thoughts. In the next article, it’s time to get moving - you’ve made your case, and you’ve secured the leadership commitment and resources. Time to unbox exactly what exists within an AI management system, piece by piece.