Avoiding Fracture between AI and Corporate Governance: A cautionary tale
Australia's Robodebt disaster is a lesson in how silos of governance fail, and why AI governance must be woven into corporate governance.
Building an effective high-integrity AI Management System requires grappling with an uncomfortable truth: governance that appears robust on paper can still collapse catastrophically when disconnected from broader organisational oversight.
This reality was laid bare by a recent failure of automated decision-making in Australia. Australia's Robodebt scheme, launched by the Australian government in 2016, used automated data matching to raise debts against hundreds of thousands of welfare recipients. The system's fundamental flaw was simple but devastating - it averaged annual income across fortnights to identify supposed overpayments, ignoring that many recipients had variable incomes that fluctuated throughout the year. Vulnerable people received massive, incorrect debt notices, often for tens of thousands of dollars. The human cost was staggering. The stress and trauma are thought to have contributed to at least three known suicides1.
“Robodebt was a crude and cruel mechanism, neither fair nor legal, and it made many people feel like criminals.”
Commissioner Catherine Holmes2
What's most revealing about Robodebt isn't just its technical flaws - it's how an advanced analytical system with apparently robust governance could go so catastrophically wrong. The scheme had sophisticated technical controls, formal oversight processes, and documented procedures. If you’ve ever worked with Services Australia, you know that they put heavy emphasis on procurement, IT security compliance, policies, processes and procedures. It is one of the largest government agencies in Australia, processing some $2bn of payments each week. Yet when the Administrative Appeals Tribunal (an independent review body with an oversight function) began ruling the scheme unlawful as early as 2016, its decisions never properly reached key decision-makers. When frontline staff witnessed the devastating impact on members of the community, their concerns vanished into the void between technical oversight and organisational governance. Sophisticated technical and procedural controls became a facade masking fundamental failures in integrated oversight3.
By 2017, there were 132 separate tribunal decisions finding the scheme's debt calculations legally invalid. Yet the system continued operating for years afterward, protected by governance structures and bureaucracy that existed in isolation from broader organisational oversight. These disconnects between technical controls and organisational governance allowed leaders to "double down" on the scheme even as evidence mounted of its fundamental flaws. The Royal Commission would later cite "venality, incompetence and cowardice" as factors that sustained the scheme despite clear evidence of its failings.
There is a real risk that a similar disconnection plays out within the many organisations implementing AI systems today. A cloud provider might perform rigorous model validation and technical assessments while failing to connect them to enterprise risk management processes or to consider the impact of misuse. A financial services firm might implement robust security monitoring without integrating it into its fraud and insider-manipulation oversight mechanisms. A manufacturer might create advanced AI-enabled vision in a miniature device without reflecting on how such a device could be used for surveillance and privacy intrusion.
I don’t believe the solution is to build more rigidity and bureaucracy or create a silo of AI governance - instead it's to weave AI governance into your organisation's existing fabric.
AI governance has to begin with understanding how your organisation already manages risk, ensures quality, and maintains compliance. Where do decisions get made? How does information flow? What processes already exist for handling issues or approving changes? How are security or privacy incidents currently handled, and could those mechanisms be adapted? These existing governance mechanisms have evolved over time to match your organisation's culture and needs. They're the foundation you need to build upon and evolve.
When I start with a new team or organisation, I always begin by looking for natural connection points where AI governance can plug into existing processes. If you have an established change management system, you’ll want to extend it to cover AI model updates rather than creating a parallel process. If you already have risk assessment procedures, you will probably want to add AI-specific considerations rather than building a separate framework. When you need new controls specific to AI, design them to complement and connect with existing governance rather than operating independently.
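To make this concrete, here's a minimal sketch of what extending an existing change-management gate might look like in code. Everything here is hypothetical - the field names, checks and example values stand in for whatever your organisation already uses - the point is simply that AI model updates pass through the same gate as any other change, with a few extra questions asked.

```python
# Minimal sketch: extending an existing change-request check with AI-specific
# considerations, rather than building a parallel approval process.
# All field names and checks here are hypothetical.

BASELINE_CHECKS = [
    ("description", lambda c: bool(c.get("description"))),
    ("rollback_plan", lambda c: bool(c.get("rollback_plan"))),
    ("approver", lambda c: bool(c.get("approver"))),
]

# AI-specific additions, appended to the same gate the organisation already uses.
AI_CHECKS = [
    ("model_eval_results", lambda c: bool(c.get("model_eval_results"))),
    ("bias_assessment", lambda c: bool(c.get("bias_assessment"))),
    ("human_oversight_impact", lambda c: bool(c.get("human_oversight_impact"))),
]

def review_change(change: dict) -> list[str]:
    """Return the list of missing items for a change request.

    AI-related changes are held to the baseline checks *plus* the AI-specific
    ones - one process, extended, not duplicated.
    """
    checks = list(BASELINE_CHECKS)
    if change.get("involves_ai_model"):
        checks += AI_CHECKS
    return [name for name, ok in checks if not ok(change)]

# Example: a model update missing its bias assessment
missing = review_change({
    "description": "Update credit-scoring model to v2.3",
    "rollback_plan": "Redeploy v2.2 from the model registry",
    "approver": "change-advisory-board",
    "involves_ai_model": True,
    "model_eval_results": "eval-report-2024-q3.pdf",
})
print(missing)  # ['bias_assessment', 'human_oversight_impact']
```

The design choice worth noting is that the AI checks are appended to the baseline list rather than forming a second approval path - the existing gate keeps its authority, it just asks more of AI-related changes.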
This integration takes effort and careful thought. You'll need to identify gaps where existing processes need enhancement. You'll need to train people on new considerations while leveraging their existing expertise. You'll need to carefully document how AI governance connects to other domains. But this investment pays off in creating governance that works in practice, not just on paper.
This isn't just about efficiency - it's about effectiveness. When AI governance is truly integrated, issues surface through multiple channels. Problems trigger responses across domains. Information flows naturally to decision-makers through established pathways. The system becomes resilient through interconnection rather than isolation.
Building bridges to existing governance practices
Let’s turn to some practical techniques for mapping these connections and building these integrations. The goal isn't to create perfect governance - it's to create governance that works, learning from failures like Robodebt to build systems that genuinely protect stakeholders while enabling innovation.
This is a bit like detective work - you're looking for both the formal structures everyone knows about and the informal pathways where real decisions happen. When I led the implementation of an AI Management System recently, we started by spending a few weeks observing how decisions actually flowed through the organisation. We discovered that while there was a formal quality review board that approved all new systems for launch, most of the real technical governance happened in informal "pre-reviews" at an engineering level, where proposals would be shaped and decisions made before they ever reached the documented review process. This wasn't a bad thing at all - it was just important for us to be aware of as we sought to introduce new controls and mechanisms.
This kind of approach proves invaluable when integrating AI governance. Rather than creating new approval mechanisms that might be bypassed or resented, we found it much better to work directly within those existing pre-reviews to include AI-specific considerations. Teams embraced this because it worked with their established patterns rather than disrupting them. More importantly, it meant AI governance considerations entered the conversation at the right time - when designs were still fluid enough to incorporate them naturally.
The key is to look for these natural integration points across your organisation. Start with the formal structures - your risk committees, change advisory boards, security reviews, and quality gates. Document how they interact and where decisions flow between them. But don't stop there. Talk to the people who make things happen day-to-day. Where do they go when they need approval or guidance? What informal channels do they use to get feedback on ideas? Which stakeholders do they consult before taking proposals through formal reviews?
The goal isn't just to find what exists, but to understand how it works in practice. When issues arise, how do they get escalated? When changes are needed, how do they get approved? When guidance is required, where do people look for it? This operational understanding helps you design AI governance that will work within your organisation's natural flows rather than against them.
Pay particular attention to how information moves between different teams and business leadership. This is where governance often breaks down, as we saw with Robodebt. Technical concerns need clear pathways to reach decision-makers, especially when they touch on ethical implications or potential harms. Similarly, business priorities, regulatory communications and constraints need to flow effectively to technical teams implementing AI systems. In the Robodebt disaster, this outside source of feedback never reached the teams who could have addressed the problem with relative ease.
Once you understand these flows, you can begin identifying where AI governance needs to plug in. Some connections will be obvious - AI risk assessments should feed into enterprise risk management, AI incidents should trigger existing incident response processes, AI changes should go through change management. Others might be less apparent but equally important - like ensuring your AI safety or ethics committee has clear lines of communication to both technical teams and executive leadership.
I recommend you document these connections explicitly, showing how AI governance integrates with existing processes. This documentation serves multiple purposes: it guides implementation, helps train staff on new processes, and demonstrates to auditors how your AI Management System operates as part of your broader governance framework.
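One way to keep that documentation honest is to treat the connection map itself as a small, structured artefact rather than prose buried in a policy document. The sketch below is illustrative only - the process names, owners and triggers are placeholders for whatever your organisation actually calls them.

```python
# A minimal, hypothetical sketch of an explicit "connection map" between
# AI governance activities and the existing processes they plug into.
# Process and team names are placeholders - use your organisation's own.

from dataclasses import dataclass

@dataclass
class GovernanceConnection:
    ai_activity: str          # the AI governance activity
    existing_process: str     # the established process it feeds into
    owner: str                # who is accountable for the integration
    trigger: str              # when information flows across the connection

CONNECTION_MAP = [
    GovernanceConnection(
        ai_activity="AI risk assessment",
        existing_process="Enterprise risk register review",
        owner="Risk & Compliance",
        trigger="Quarterly review, or on material model change",
    ),
    GovernanceConnection(
        ai_activity="AI incident handling",
        existing_process="Existing incident response process",
        owner="Incident response team",
        trigger="Any incident tagged as AI-related",
    ),
    GovernanceConnection(
        ai_activity="AI model updates",
        existing_process="Change advisory board",
        owner="Engineering change manager",
        trigger="Every production model release",
    ),
]

def connections_for(process: str) -> list[GovernanceConnection]:
    """Answer the auditor's question: what plugs into this process?"""
    return [c for c in CONNECTION_MAP if c.existing_process == process]

print(connections_for("Change advisory board")[0].ai_activity)  # AI model updates
```

Kept in this form, the map doubles as audit evidence and as a checklist for the regular reviews mentioned below - a connection with no owner or trigger is usually a connection that has quietly stopped working.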
But remember - this isn't a one-time exercise. As your organisation's governance evolves, these connections need to evolve too. Regular reviews ensure your AI governance remains properly integrated rather than drifting into isolation. This ongoing attention to integration might seem like extra work, but it's far less costly than dealing with the consequences of governance failures like those we saw with Robodebt.
The overlap between security, privacy and AI governance
Many organisations exploring AI governance already have established security management systems, often certified to ISO 27001, and privacy frameworks aligned with standards like ISO 27701. This existing foundation is invaluable - not just as a reference point, but as a practical base to build upon. The overlaps between security, privacy and AI governance are significant and deliberate - the underlying ISO management system standards share a common structure. All three domains share core principles of risk management, stakeholder protection, and systematic oversight.
Consider how these domains naturally intersect. Security management systems already provide frameworks for assessing risks, implementing controls, and monitoring effectiveness. Privacy governance establishes principles for responsible data handling and protection of individual rights. AI governance extends these foundations to address new challenges like algorithmic bias, model drift, and automated decision-making. Rather than creating parallel processes, a well-designed AI Management System should integrate with and build upon these existing frameworks.
For example, if your organisation has ISO 27001 certification, you already have:
A functioning risk assessment methodology that can be extended to cover AI-specific risks (see the sketch after this list)
Incident management processes that can be adapted for AI-related issues
Document control systems that can accommodate AI governance records
Management review cycles that can incorporate AI oversight
Training and awareness programs that can be expanded to cover AI responsibilities
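To make the first of these concrete, here's a minimal sketch of what extending an existing risk assessment methodology might look like. Everything is hypothetical - the category names, the 1-5 scales and the scoring formula stand in for whatever your organisation already uses - the point is that AI risks land in the same register, scored the same way.

```python
# Hypothetical sketch: extending an existing ISO 27001-style risk assessment
# with AI-specific risk categories, keeping the same scoring methodology.
# Category names and the 1-5 scales are illustrative only.

EXISTING_CATEGORIES = ["confidentiality", "integrity", "availability"]

# AI-specific categories added to the same methodology, not a new framework.
AI_CATEGORIES = ["algorithmic_bias", "model_drift", "automated_decision_harm"]

def risk_score(likelihood: int, impact: int) -> int:
    """The organisation's existing scoring formula, reused unchanged."""
    return likelihood * impact  # both on a 1-5 scale

def assess(risk: dict) -> dict:
    """Score a risk entry; AI and non-AI risks land in the same register."""
    category = risk["category"]
    assert category in EXISTING_CATEGORIES + AI_CATEGORIES, f"unknown category: {category}"
    return {**risk, "score": risk_score(risk["likelihood"], risk["impact"])}

entry = assess({
    "title": "Income-averaging logic misclassifies variable earners",
    "category": "automated_decision_harm",
    "likelihood": 4,
    "impact": 5,
})
print(entry["score"])  # 20 - sits in the existing register alongside security risks
```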
However, if you’re working in a growing startup or for some other reason don’t have established security management systems, then in my opinion, you should implement those foundations first - or at minimum, develop them alongside your AI governance framework. The reality is that without effective security management, every other aspect of assurance is somewhat meaningless. And achieving ISO 42001 certification may be practically impossible without robust security controls in place. Many of the controls required for AI governance simply depend on having basic security and privacy practices as a foundation.
This is particularly true when it comes to technical controls. Security mechanisms for access control, encryption, and monitoring provide essential infrastructure for protecting AI systems and their data. Privacy controls for data minimisation, consent management, and individual rights directly support responsible AI practices. The ISO 42001 standard defines no real technical controls - its controls are governance and operational - assuming that ISO 27001 and ISO 27017-aligned foundations, with their technical controls, are already in place.
If you have functioning governance committees, risk registers, and escalation paths, build on them rather than creating parallel structures. Your existing incident response teams probably already handle security and privacy issues - they can be trained to recognise and respond to AI-related incidents as well. The same applies to change management, supplier assessment, and other operational processes.
Sidebar: A method for mapping the field of play
Here's how I go about systematically mapping the governance landscape that an AI Management System needs to integrate with. I'll guide you through the key steps as I do them - if you know better ways, I’d love to hear about them.
1️⃣
Start by gathering three critical sets of documents that tell you where governance should be happening: your organisational chart, your risk and compliance register, and your current documented policies and procedures. These give you the official picture - they won't tell you the whole story, but they're a start. If you have past audit reports on security or privacy compliance, they may also help join the dots.
2️⃣
Identify the committees where significant technology and risk decisions get made. The formal committees are easy to spot - Architecture Review Boards, Risk Committees, Change Advisory Boards. But look deeper. I bet you’ll find meetings like "Tech Leads Sync" and “Security Monthly Reviews”, which appear nowhere in formal documentation but are the places where many design decisions are actually shaped. These informal but influential groups need to be part of your governance integration. If you can, try to sit in on both the formal and informal meetings. For each, figure out not only its formal charter but its real operational patterns. Who actually attends? How are decisions really made? What gets escalated and to whom?
3️⃣
Now, move on to the next step of mapping individual roles and responsibilities. Start with the official owners of risk, compliance, and technical governance. But then look for the informal leaders - the experienced engineers everyone consults before big decisions, the science leaders making research decisions and evaluating results, the product managers who seem to know how to get things done, the risk analysts whose opinions carry weight, the incident responders who can tell you who really makes decisions in a crisis. These are the people who make governance work in practice. Pay special attention to how information flows between technical teams and business leadership - and where it doesn't. Understanding these gaps helps you design governance that bridges them.
4️⃣
So now you understand the forums where decisions get made and the people involved in those decisions - next, you want to track some actual decisions through your organisation. Pick a few recent significant changes - a major system deployment, a critical incident, an important policy update. Follow them through your governance structure. Where did they start? Who reviewed them? How were they approved? Where did they get stuck? Again, this reveals how governance really works, not just how it's supposed to work. You might find that some steps have become almost ‘ceremonial’, while others are bureaucratic processes that nobody can rationalise any longer. Sometimes you’ll find that real decisions about changes are being made in informal engineering discussions, with the formal process merely documenting what was already decided. It’s not always about fighting this pattern - sometimes it’s just about not wasting time or effort integrating with processes that don’t yield any value.
5️⃣
Now examine your monitoring and reporting mechanisms. What metrics drive decisions? How is risk measured and reported? Where do early warnings come from? Document where these mechanisms work well and where they break down. Are warnings reaching the right people? Is information actionable? Does feedback lead to meaningful changes? These insights help you design AI governance that leverages effective channels while fixing broken ones.
6️⃣
The final step focuses on integration points. For each major aspect of AI governance - risk assessments, model validation, incident response, performance monitoring - start thinking about where it naturally connects to existing processes. Look for opportunities to extend rather than duplicate. In subsequent articles, I’ll go through formally building your AI governance framework and we’ll return to this topic of integration, but even now, while you’re still mapping existing governance practices, you’ll naturally start spotting points of future integration - the sketch below shows one way I capture these findings as I go.
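Here's a minimal sketch of the kind of structure I mean - a simple way to record forums, decision traces and warning channels as you work through the steps above. The field names and example values are hypothetical, loosely echoing the failure modes Robodebt exposed.

```python
# A minimal, hypothetical structure for capturing what the mapping exercise
# surfaces - forums, traced decisions, and warning channels - so the findings
# don't stay locked in interview notes. All values are illustrative.

from dataclasses import dataclass, field

@dataclass
class Forum:
    name: str
    formal: bool                 # documented committee vs informal sync
    decisions_shaped: list[str]  # what actually gets decided here

@dataclass
class DecisionTrace:
    change: str
    path: list[str]              # forums the decision passed through, in order
    stuck_at: str | None = None  # where it stalled, if anywhere
    ceremonial_steps: list[str] = field(default_factory=list)

@dataclass
class WarningChannel:
    source: str                  # e.g. a tribunal, frontline staff, monitoring
    reaches_decision_makers: bool
    notes: str = ""

# Example findings, loosely echoing the disconnects Robodebt exposed
findings = {
    "forums": [Forum("Quality review board", True, ["launch approval"]),
               Forum("Tech leads sync", False, ["design direction"])],
    "traces": [DecisionTrace("Scoring model v2 rollout",
                             ["Tech leads sync", "Quality review board"],
                             ceremonial_steps=["Quality review board"])],
    "warnings": [WarningChannel("Frontline support staff", False,
                                "Concerns raised in tickets, no escalation path")],
}

# The gaps worth fixing first: warning channels that never reach decision-makers
gaps = [w.source for w in findings["warnings"] if not w.reaches_decision_makers]
print(gaps)  # ['Frontline support staff']
```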
This mapping process takes time - typically several weeks of interviews, document reviews, and observation in a medium to large organisation. But this investment pays off many times over in governance that works with your organisation rather than against it. The goal isn't just to understand how things work now, but to identify how high-integrity AI governance can become an integral and sustainable part of your organisation's decision-making fabric.
As we explore specific aspects of AI governance, risk management and control frameworks in coming articles, these overlaps will become increasingly apparent. We'll see how security principles inform AI system protection, how privacy requirements shape data governance, and how existing risk frameworks can evolve to address new AI challenges. This integrated approach not only makes implementation more manageable - it makes your governance more effective. When security, privacy, and AI governance work together, issues are more likely to be caught, escalated appropriately, and addressed comprehensively. The alternative - separate, parallel systems - creates exactly the kind of gaps and disconnects that led to failures like Robodebt.
At this point in our journey, we've mapped the terrain thoroughly - first understanding our organisation's AI systems from both business and technical perspectives, then exploring how they must connect with your existing governance landscape. This groundwork is essential. The Robodebt tragedy shows what happens when we fail to understand these connections - when governance exists on paper but not in practice, when technical and operational controls operate in isolation from organisational oversight, and when warning signs get lost in the gaps between systems.
Now we're ready to build something more robust - starting with the core Governance Framework that will form the foundation of your AI Management System. My next article begins this crucial phase of the work. Stay with me - we're moving from mapping to making, and there's much more ahead.
1. https://www.pmc.gov.au/sites/default/files/resource/download/gov-response-royal-commission-robodebt-scheme.pdf
2. https://www.monash.edu/__data/assets/pdf_file/0007/3365503/report_of-the-royal-commission-into-the-robodebt-scheme.pdf
3. https://lsj.com.au/articles/crude-cruel-and-unlawful-robodebt-royal-commission-findings/