AI Governance has a Culture Problem
AI Governance is becoming mired in process, regulation and compliance theatre. The aviation industry has shown that's not the path to real safety.
On a foggy morning in 1977, two Boeing 747s collided on the runway at Tenerife airport, killing 583 people in aviation's worst ever accident1. The investigation revealed a cascade of human factors: miscommunication, time pressure, hierarchy, and confirmation bias. The KLM captain, ironically the airline's head of safety and considered "the best of the best," had created a cockpit environment where junior crew members didn't feel empowered to challenge his decisions, even when they sensed something was wrong. But there was another factor that illustrates why arbitrary safety rules can actually create danger when disconnected from real practice.
The crew of the KLM 747 faced a terrible dilemma. In 1974, the Netherlands had introduced strict flight duty time limits with severe penalties. Pilots could lose their licenses or even face imprisonment for exceeding them. A 1976 law made these calculations so complex that pilots had to call their company each flight to learn their limits. At Tenerife, the KLM crew found they might not make it back to Amsterdam before their duty time expired. If they couldn't depart by 19:00, they'd have to cancel the flight, stranding 235 passengers and 14 crew on a small island at peak tourist season.
This pressure, created by safety regulations meant to prevent tired pilots from flying, contributed to rushed decisions in deteriorating weather. The very rules designed to enhance safety had become a source of danger.

As the worst accident in aviation history, it jolted the industry into action. Their response transformed not only procedures but fundamentally shifted its safety culture. Instead of simply adding more rules about radio comms, pre-flight checklists, and runway markings, the aviation industry embraced a revolutionary approach: understanding why things usually go right, not just cataloging what went wrong. They set out to build a culture where pilots could report mistakes without fear, where crews were trained to speak up regardless of seniority, and where learning from normal operations became as important as investigating accidents.
Today's AI governance stands at a similar crossroads. As AI systems become critical infrastructure, powering everything from medical diagnoses to financial decisions, we're predictably responding with an avalanche of laws, rules, standards, and compliance frameworks. The EU AI Act categorises risks and prescribes complicated procedures. ISO standards multiply - 82 AI standards published or in development at last count2. Countries around the world are proposing new laws, regulations, and policies. Documentation requirements expand. Companies scramble to hire compliance officers who often don't understand the technology they're meant to govern, and are ill-equipped to guide safety initiatives.
We're building the same kind of culture that aviation had before Tenerife: one that believes safety comes from compliance with ever-more-detailed procedures, assessment reports and approval processes. And just like pre-Tenerife aviation, we're missing what actually creates safety in complex systems.
The Tale of Two Safeties
Erik Hollnagel's distinction between Safety I and Safety II thinking illuminates our predicament3. Safety I defines safety as the absence of accidents, things that go wrong. It responds to failures by adding rules, barriers, and constraints. When something bad happens, Safety I asks "who failed?" and "what rule should we add?" It assumes that if people just follow procedures correctly, systems will be safe.
Safety II takes a radically different view. It defines safety as the ability to succeed under varying conditions. Instead of obsessing over the rare failures, it studies why things go right most of the time. Safety II recognises that people create safety through constant small adaptations. Instead of "who screwed up?", it asks "how do people usually make this work?"
Think about how these approaches play out in practice. When a medical AI system makes an error, Safety I responds with more documentation requirements, additional approval layers, and stricter audit trails. Safety II would investigate how clinicians normally work with the AI successfully. What informal checks they use, how they validate outputs, when they trust or override its recommendations.
In my view, the EU AI Act exemplifies Safety I thinking at scale. High-risk AI systems must undergo conformity assessments, maintain detailed technical documentation, implement quality management systems, and ensure human oversight, all before deployment. The Act assumes we can anticipate and proceduralise our way around every risk. But organisations will struggle to comply with such a framework that is so massively complicated, and we have no evidence to suggest it will make AI any safer.
Meanwhile, ISO standards like 42001 and 42005 codify perceived "best practices" into institutional processes, possibly creating what amounts to expensive compliance theatre. These standards can be very valuable as guidance and structured frameworks. Done right, they make a positive contribution. But I’ve witnessed organisations investing substantial resources into documentation that few read and processes that are disconnected from engineering reality and can't evolve with the technology. Sooner or later, those companies that implement checkbox compliance based on those standards discover you can't paperwork your way to safety.
Why Culture Eats Compliance for Breakfast
The fundamental problem isn't that rules and standards are bad. It's that they're inadequate for managing safety in complex, adaptive systems. AI deployment happens in messy, unpredictable real-world contexts where:
Use cases emerge that no regulator anticipated
Models behave differently across different populations and contexts
The interaction between human judgment and AI recommendations creates emergent behaviors
Bad actors constantly probe for new vulnerabilities
The technology itself evolves faster than regulations can adapt
I think the concepts of Just Culture4, a cornerstone approach to modern safety, show us a better way. As a mindset, Just Culture creates environments where people can report mistakes without fear of punishment for honest errors, while maintaining accountability for wilful violations and gross negligence. It recognises three categories of behaviour:
Safe behaviours that should be reinforced
Risky behaviours that need coaching to understand why shortcuts aren't worth it
Reckless behaviours that warrant sanctions due to conscious disregard for substantial risk
This nuanced approach enables learning. When an AI system produces unexpected outputs or gets manipulated in novel ways, do practitioners feel safe reporting it? Or do they hide problems or explain them away to avoid liability or unproductive work under rigid compliance frameworks?
The current culture in AI governance risks driving behaviour underground. Engineers discover vulnerabilities but fear reporting them might trigger compliance violations. Data scientists make informal adaptations to make systems work safely but don't document them as they’re inconsistent with ‘procedure’. Risk management teams underrate or avoid reporting risks that just result in more paperwork, scrutiny or criticism from leadership. Elegant, sanitised artifacts are presented to auditors for rubber-stamping into certifications while true problems and issues are concealed for fear of an adverse audit finding. Organisations achieve paper compliance while not meaningfully enhancing safety.
Building Adaptive Capacity for AI Safety
What would Safety II look like for AI governance? Instead of trying to anticipate every risk through pre-deployment procedures, it would build adaptive capacity: the ability of people and systems to handle unexpected situations successfully.
Aviation's evolution offers a roadmap. After Tenerife, the industry didn't just change rules; it revolutionised training and culture. Crew Resource Management (CRM) teaches all crew members to speak up about safety concerns regardless of hierarchy. Airlines study normal operations to understand how safety actually emerges. Just Culture protects those who report problems while maintaining accountability.
For AI governance, this means shifting focus from compliance documentation to capability building:
1. Bridge the Technical-Governance Divide. AI governance today suffers from a troubling disconnect: it's populated by policy and legal professionals without deep technical understanding or experience, while the engineers and data scientists who actually build and monitor AI systems are treated as subjects of governance rather than partners in creating it. We need to actively bring technical practitioners into governance roles and strengthen the insight of policy and legal practitioners into technical contexts. This means creating cross-functional teams where engineers help shape policies that are technically feasible and meaningful, and where governance pros understand enough about model architectures, training processes, and deployment challenges to craft rules that enhance rather than hinder safety. The best safety practices emerge when those who understand the technology deeply are empowered to influence how it's governed and develop adaptations.
2. Practical Skills Development. We need fewer people who can fill out compliance matrices and more who understand how AI systems are built safely, and how they fail in the real-world. Just as the field of cybersecurity has long recognised that hands-on adversarial training produces the most effective defenders, the AI field needs to embrace this approach. This would put more attention on AI red team training programs where participants actually break systems, understand attack patterns, identify bias, recognise privacy risks and develop intuition for AI safety issues — mirroring the proven methods that transformed cybersecurity education from theoretical exercises to practical skill-building.
3. Learning from Success. Instead of only investigating AI failures, we should systematically study successful deployments. How do effective teams validate AI outputs? What informal practices emerge for catching errors? When do practitioners trust versus override AI recommendations? This knowledge is far more valuable than another compliance checklist.
4. Psychological Safety for Reporting. Organisations need clear policies protecting those who report AI safety issues in good faith. The aviation model shows this works: when people know they won't be punished for honest mistakes, reporting increases dramatically, creating rich data for improvement.
5. Behavioral Incentives Over Procedural Requirements. Rather than mandating specific processes, governance frameworks should incentivise safe behaviours and embed meaningful, lightweight activities into the AI lifecycle. For example, performing rapid system impact checks during development sprints, rewarding teams that discover and responsibly disclose vulnerabilities or limitations, or recognising practitioners who develop innovative safety practices and transparently disclose information that others can adopt.
6. Embrace Productive Friction. Some inefficiency serves safety. Teams need time to investigate hunches, to discuss concerns, to develop shared understanding. Build this slack into development cycles rather than optimising it away. When problems arise, leaders should ask "What can we learn?", not "Who is responsible?" This cultural signal shapes whether people hide problems or surface them early.
The Path Forward
The choice between Safety I and Safety II isn't academic. Every day we spend building compliance bureaucracies is a day we're not building real adaptive capacity. Every engineer and data scientist filling out documentation could be developing practical safety skills. Every rigid procedure prevents the adaptations that create actual safety.
Aviation learned this lesson through tragedy. After Tenerife, the industry realised that their best pilot, following all procedures, under pressure to comply with strict rules, could still cause the deadliest accident in history because the culture was wrong. The transformation that followed made aviation one of the safest industries in the world.
AI governance doesn't need its own Tenerife moment to learn these lessons. We can choose now to build cultures that create real safety:
Where practitioners have the skills to identify and mitigate real risks
Where organisations learn from what goes right, not just what goes wrong
Where people feel safe reporting problems and sharing adaptations
Where governance evolves with technology rather than constraining it
Where we recognise that humans create safety through judgment, not compliance
The current trajectory leads to a dangerous place: organisations that are compliant and certified as "safe" on paper while real risks multiply in practice. Teams afraid to report problems. Innovation blocked or delayed for no meaningful benefit. A widening gap between regulatory fiction and operational reality. The unnecessary waste and irrelevance of checkbox compliance and the inexorable slide towards malicious compliance5.
But there's another path. One where AI governance starts with culture, builds practical capabilities, and recognises that safety emerges from how people actually work with these systems. Where we stop trying to eliminate the human element and start enabling it.
The question isn't whether we need governance for AI. We absolutely do, and we need effective laws, regulations and standards to support it. The question is whether that governance will create real safety or just the illusion of it. Aviation chose real safety after paying a terrible price. We can choose it now, before the bill comes due.
In my opinion, the transformation starts with a simple recognition: safety isn't something we achieve through compliance. It's something we create, every day, through the culture we build and the behaviours we encourage. Compliance is nothing more than a backstop for the most reckless behaviours.
AI governance has a culture problem. We still have time to fix it.
https://admiralcloudberg.medium.com/apocalypse-on-the-runway-revisiting-the-tenerife-airport-disaster-1c8148cb8c1b
https://www.iso.org/committee/6794475.html
https://www.researchgate.net/publication/285396555_Erik_Hollnagel_Safety-I_and_Safety-II_the_past_and_future_of_safety_management
https://sidneydekker.com/books
As someone who is both an aviation enthusiast and a provider of governance tools for AI governance, the only stance I can take is one of complete agreement. Risk management documentation might help reduce legal risk, but if the underlying technical reality and the documents are disconnected from each other and if no-one in leadership can understand the implications of what the risk analyses really say, then it's all just kabuki theatre.
Loved this post. Thank you.