What's in a real-world AI Management System? (Part 1)
Understand how the four puzzle pieces of an AI management system fit together in real practice: a governance framework, a risk management framework, operational processes, and a documentation system. Part 1 of 2
In my previous articles1, I wrote about why high-integrity AI governance matters, how it differs from checkbox compliance, and how a thoughtful approach to AI assurance can accelerate innovation while protecting against catastrophic failures like those experienced at GM Cruise. We've made the business case for investing in proper governance and discussed how to secure genuine leadership commitment. But I haven’t yet covered what exactly is in a real-world AI Management System (AIMS). So that’s what this article is about – just the bare essentials from my experience implementing governance systems for real and sustained use at Amazon and Microsoft.
An AI Management System is a lot more than a collection of documents or a set of rules - it's an evolving mechanism that actively shapes how an organisation develops, deploys, and manages AI technologies. Yes – it does include policies and procedures, and it does require high quality documentation, but its real power lies in how it guides decisions, coordinates actions, and ensures accountability across every aspect of AI development and operation.
Four pieces of the puzzle
I look at an AI Management System as having four interlocking elements that each have to work well together, as well as with the larger organisation: a governance structure that forms the decision-making backbone, a risk management framework that serves as an early warning system, operational processes that guide day-to-day activities, and a documentation system that maintains institutional memory. I’ll walk through each of these at a high level before we dig into details, and in future articles I’ll go into exactly how to build each of these.
The governance framework is the backbone structure - it's where accountability is defined and flows through the organisation. Okay, I know that sounds abstract, but in practice it means having clear and agreed answers to crucial questions like: Who can approve the deployment of a new AI model or application? When does a safety concern need executive attention versus being handled by the development team? What quality or safety evidence must be reviewed before an AI system goes live? It exists so that these kinds of decisions are not left to chance or individual judgment in the heat of a crisis. Instead, it creates clear pathways for decisions, so the right people are involved at the right time with the right information. It helps avoid the chaos, churn and overload that ensue when a critical issue emerges and nobody knows who should be informed or who is accountable for making the decision. If you feel like every decision and every issue of any consequence needs to go to leadership for visibility and approval, then you have a governance problem. Likewise, if highly consequential issues and decisions get made without any leadership visibility, then you also have a governance problem.
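To make that concrete, here's a minimal sketch in Python of what a pre-agreed escalation path can look like once it's written down rather than improvised in a crisis. The severity tiers and role names here are purely illustrative assumptions, not a prescription:

```python
from enum import Enum

class Severity(Enum):
    LOW = 1       # e.g. cosmetic output issue, no user impact
    MEDIUM = 2    # e.g. degraded accuracy within agreed tolerances
    HIGH = 3      # e.g. potential bias or safety impact on users
    CRITICAL = 4  # e.g. active harm, legal exposure, or outage

# Who must be informed and who decides, per severity tier. The point is
# that this table is agreed in advance, not invented under pressure.
ESCALATION_PATH = {
    Severity.LOW:      {"inform": ["team_lead"],                "decides": "team_lead"},
    Severity.MEDIUM:   {"inform": ["team_lead", "ai_gov_lead"], "decides": "ai_gov_lead"},
    Severity.HIGH:     {"inform": ["ai_gov_lead", "cto"],       "decides": "governance_committee"},
    Severity.CRITICAL: {"inform": ["cto", "ceo", "legal"],      "decides": "governance_committee"},
}

def route_issue(severity: Severity) -> dict:
    """Return the pre-agreed escalation path for an issue of this severity."""
    return ESCALATION_PATH[severity]

print(route_issue(Severity.HIGH))
# {'inform': ['ai_gov_lead', 'cto'], 'decides': 'governance_committee'}
```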
The risk management framework is your organisation's early warning system, but don't think of it as traditional IT risk management. AI systems present unique challenges - they can develop subtle biases over time, their performance can degrade in ways that aren't immediately obvious, and they often operate in domains where failures could have serious real-world consequences. A proper risk framework doesn't just identify these potential issues - it helps organisations understand them deeply enough to take meaningful action. It considers not just technical risks but ethical implications, societal impacts, and long-term consequences. You need this framework to be sophisticated enough to catch subtle issues while remaining practical enough for daily use. The most important characteristic of an effective risk management framework is that it enables communication across different disciplines and up the management chain. I've seen beautifully crafted and curated risk dashboards used by nobody but a small group in the risk management team. As façade artifacts, they are worse than useless. Risk management means communicating, debating and acting upon risks.
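As a sketch of what a communicable risk record might look like, here's a hypothetical register entry in Python. Every field name here is my own assumption, but the intent is the important part: a lawyer, an engineer and an executive should all be able to read the same record and debate it.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRiskEntry:
    """One risk register entry, written to be readable across disciplines."""
    risk_id: str
    description: str         # plain language, no jargon
    affected_systems: list[str]
    likelihood: str          # e.g. "rare" / "possible" / "likely"
    impact: str              # technical, ethical and societal consequences
    owner: str               # a named person, not a team
    mitigation: str          # the action being taken, not just the worry
    review_date: date        # risks get revisited, not filed and forgotten

register = [
    AIRiskEntry(
        risk_id="R-014",
        description="Recommendation model may drift as the customer base changes",
        affected_systems=["product-recs-v3"],
        likelihood="likely",
        impact="Degraded relevance; potential unfair treatment of new customer segments",
        owner="jane.doe",
        mitigation="Weekly automated drift monitoring with alerting",
        review_date=date(2025, 3, 1),
    ),
]
```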
Operational processes are where principles turn into practice. These are the day-to-day procedures that guide how your teams actually work with AI systems. They need to cover operationally significant activities, from how data scientists document their model development to how operations teams monitor deployed systems. But these aren't just checkboxes to tick - they're carefully designed workflows that embed responsible practices into regular work. For instance, they ensure that bias testing isn't an afterthought but an integral part of model validation, or that monitoring for model drift happens automatically rather than waiting for problems to become obvious, or that customer reports of 'odd' results don't just get lost in the noise. They change and evolve with knowledge, experience and scale. Without strong and effective operational processes, the governance mechanisms are just meaningless words on paper.
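To illustrate the "monitoring happens automatically" point, here's a minimal drift check using the population stability index (PSI), a common measure of distribution shift between training-time data and live data. The 0.2 alert threshold is a widely used rule of thumb, not a standard your process must adopt:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a reference distribution and live data; higher means more drift."""
    # Bin edges come from the reference (e.g. training-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    expected_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins to avoid division by zero and log of zero.
    expected_frac = np.clip(expected_frac, 1e-6, None)
    actual_frac = np.clip(actual_frac, 1e-6, None)
    return float(np.sum((actual_frac - expected_frac) * np.log(actual_frac / expected_frac)))

# Simulated example: the live feature distribution has shifted from the baseline.
rng = np.random.default_rng(0)
psi = population_stability_index(rng.normal(0.0, 1.0, 10_000),
                                 rng.normal(0.5, 1.0, 10_000))
if psi > 0.2:  # common rule-of-thumb threshold for material drift
    print(f"Drift alert (PSI={psi:.2f}) - open an incident rather than wait for complaints")
```

A check like this runs on a schedule against production data and feeds the escalation paths above, so drift becomes a routed incident instead of a surprise.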
Finally, the documentation system might sound like the least exciting component (and probably is, to be fair), but it's crucial for high-integrity assurance, and it doesn't have to be the dread of every engineer. Think of it as your organisation's institutional memory for AI - a record that captures not just what decisions were made, but why they were made and what evidence supported them. When questions arise months or years later, whether from regulators, new team members, or during incident investigations, this system provides the context needed to understand past decisions and learn from them. It transforms individual experiences into organisational knowledge.
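A sketch of what one such record might contain, again in Python with illustrative field names of my own choosing. The essential point is capturing the why and the evidence, not just the what:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AIDecisionRecord:
    """One append-only governance decision record: what, why, and on what evidence."""
    decision_id: str
    date_decided: date
    decision: str          # what was approved or rejected
    rationale: str         # why - the context a future reader will need
    evidence: list[str]    # links to eval reports, bias tests, sign-offs
    decided_by: str        # the accountable approver, per the governance framework
    review_trigger: str    # what would cause this decision to be revisited

record = AIDecisionRecord(
    decision_id="D-2025-031",
    date_decided=date(2025, 2, 14),
    decision="Approve deployment of support-triage model v2 to EU customers",
    rationale="v2 halves misrouting rate; language fairness gap within agreed tolerance",
    evidence=["evals/triage-v2-report.pdf", "fairness/language-parity-2025-02.md"],
    decided_by="ai_governance_lead",
    review_trigger="Misrouting rate exceeds 5% over any 7-day window",
)
```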
What makes these four pieces powerful is how they work together in practice. When a team proposes a new AI feature or discovers an issue with an existing system, they combine to ensure the situation is handled appropriately. The governance structure guides who needs to be involved, the risk framework helps evaluate potential impacts, operational processes provide clear steps forward, and the documentation system ensures everything is captured for future reference and learning.
It’s not about creating bureaucracy - it's about building the mechanisms that enable responsible innovation at scale. As I’ve described before, a well-designed AI Management System actually accelerates development by providing clear guardrails within which teams can innovate confidently. When teams know exactly what evidence they need to show and who needs to approve their work, they can move faster while maintaining high standards.
Each of these four components will evolve independently as your organisation grows and your AI capabilities mature. What works when you have a small team developing your first AI application will need to adapt as that team expands and takes on more complex challenges. The overall system needs to be robust enough to ensure consistency while staying flexible enough to adapt to new challenges and opportunities.
So, with that overview, let's look at each of these components in more detail, and I'll go through some practical pointers on how to build them effectively and make them work together. But before we do that, let's take a brief detour through six dangerous myths I've come across about AI Management Systems:
Sidebar – 6 dangerous myths about an AIMS
❌ "It's just bureaucracy in disguise"
✅ A proper AIMS actually reduces bureaucratic overhead by providing clear decision paths and automated workflows. Without one, teams waste countless hours debating who needs to approve what or reinventing processes for each project.
❌ "It slows down innovation and kills agility"
✅ Teams with a well-designed AIMS typically ship faster because they have clear guardrails and don't get bogged down in last-minute compliance reviews or emergency fixes. It's the difference between having traffic lights versus a free-for-all at busy intersections.
❌ "We're too small to need formal governance"
✅ Size doesn't determine risk - even a single AI system making consequential decisions needs proper oversight. Small organisations often benefit most from clear governance because they can't afford major missteps.
❌ "We can just add governance later when we scale"
✅ Retrofitting governance onto existing AI systems is much more expensive and riskier than building it in from the start. It's like trying to add a foundation after the house is built.
❌ "Our engineers are responsible - we can trust their judgment"
✅ Individual judgment, no matter how good, can't replace systematic oversight. The most catastrophic AI failures often happen in organisations full of brilliant, well-intentioned people who lacked proper governance structures.
❌ "We can build the policies and processes to get certification, but don’t distract our scientists and engineers by actually enforcing them"
✅ A high-assurance management system needs to be applied for the benefits to become real. Anything less is checkbox compliance and a mere façade that will crumble in a real crisis.
Unboxing our AIMS – first, the governance framework
I've found that a well-designed governance framework often makes the difference between organisations that can scale their AI initiatives confidently and those that get bogged down in confusion and delays. I’ll try to explain what I think really needs to exist within the governance framework of your AI Management System - just the minimal structures, roles, and mechanisms that make it work in practice.
At its foundation sits your AI policy, but I want to be clear: this isn't just another corporate document destined to gather dust. A policy that is too vague and high-level will result in ambiguous, unhelpful decision-making practices; equally, it shouldn't be so long and complex that nobody reads it. It can make sense to reinforce trust and accountability by aligning with external standards, such as the OECD AI Principles2, which emphasise responsible stewardship and fairness, and ISO/IEC TR 240283, which provides insights on the characteristics of trustworthiness in AI systems. By aligning your governance structure to these frameworks (while still tailoring them to your specific needs), you help ensure that your AI initiatives meet robust global benchmarks and avoid significant gaps or blindspots.
Then comes the AI Governance Committee, which usually emerges as a natural extension of existing technology governance. The committee brings together technical leadership, data science expertise, legal counsel, and key business stakeholders to provide strategic oversight. While some large organisations maintain standing panels of external experts, many find it more effective to have targeted advisory relationships they can draw upon for complex challenges.
Day-to-day governance happens through what you might call an "AI Operations Review" - regular forums where technical leads, engineers, and assurance specialists review new deployments, changes, incidents, and performance metrics. This operational layer makes decisions within clearly defined parameters while knowing the triggers to escalate matters to the governance committee.
The AI Governance Lead plays a pivotal role, balancing hands-on work with engineering and science teams alongside governance oversight. They need clear lines of communication to both technical leadership and business units to create effective escalation paths when issues arise. This role requires someone who can bridge the technical and governance domains, translating between engineering realities and organisational responsibilities. It’s not an easy role, requiring some significant dexterity and a broad set of skills.
The authority structure maps out exactly who can make which decisions throughout an AI system's lifecycle. A clear tiered approach works well: decisions technical teams can make independently, those requiring operational review, and those demanding committee oversight. This gets documented in decision matrices that specify what evidence is required for different approval levels - from routine model updates to deploying AI in entirely new domains.
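As an illustration, a decision matrix can start as something as simple as a shared table. The tiers, examples and evidence below are hypothetical and would need tailoring to your domain and risk appetite:

```python
# Hypothetical three-tier decision matrix: who approves what, and on what evidence.
DECISION_MATRIX = {
    "tier_1_team_decision": {
        "examples": ["routine retrain on refreshed data", "prompt template tweak"],
        "approver": "tech_lead",
        "required_evidence": ["regression evals vs. current production baseline"],
    },
    "tier_2_operational_review": {
        "examples": ["new model architecture", "expanded user population"],
        "approver": "ai_operations_review",
        "required_evidence": ["evaluation report", "bias testing results", "rollback plan"],
    },
    "tier_3_committee_oversight": {
        "examples": ["AI in an entirely new domain", "consequential automated decisions"],
        "approver": "governance_committee",
        "required_evidence": ["full risk assessment", "legal review", "incident response plan"],
    },
}

def approval_requirements(tier: str) -> dict:
    """Look up the pre-agreed approver and evidence for a given decision tier."""
    return DECISION_MATRIX[tier]
```

Even in this toy form, a team proposing a change can see in advance which tier it falls into, who approves it, and what evidence to prepare - which is precisely what removes the last-minute churn.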
Integration with existing management systems proves crucial for sustainability. Rather than creating parallel processes, successful organisations extend their current security reviews, change management, and incident response procedures to encompass AI-specific considerations. Change advisory boards add AI criteria to their checklists, security incident processes expand to cover AI incidents, risk registers grow to include AI risks - all while maintaining consistent assessment methodologies.
While documentation forms the backbone of governance, equally important are the communication channels that help identify and resolve issues early. Regular touchpoints between key players often prevent small concerns from becoming major problems. The most effective governance frameworks create clear pathways for both formal oversight and informal collaboration, never hindering the freeform collaboration and problem-solving that can provide quick resolution and progress.
So, this is just a very high-level overview, and in a subsequent article I’ll provide more detail and practical specifics, even some templates to help you get started. The key thing to take away is that governance isn't about creating bureaucratic barriers - it's about enabling confident innovation by ensuring decisions are made at the right level with the right information by the right people. When teams understand exactly what approvals they need, what evidence to provide, and who to engage for different decisions, they can move faster. A well-designed governance framework becomes the foundation that allows an organisation to scale their AI initiatives responsibly.
In my next article, I’ll continue through the three other parts of the puzzle: the risk management framework, operational processes and the documentation system you’ll need to put in place. You can always find all the other articles in this series on real-world AI Management Systems at The Company Ethos.
2. https://www.oecd.org/en/topics/ai-principles.html
3. https://www.iso.org/standard/77608.html