6 Comments
Vincent Nunan

As someone who is both an aviation enthusiast and a provider of AI governance tools, the only stance I can take is one of complete agreement. Risk management documentation might help reduce legal risk, but if the documents are disconnected from the underlying technical reality, and no one in leadership understands the implications of what the risk analyses really say, then it's all just kabuki theatre.

Michael

Loved this post. Thank you.

Thomas

Excellent insight. We've been working for a few years to apply the methodologies of human factors in industrial safety to the practice of Model Risk Management and Human Oversight, but there's a long road ahead. GRC managers seem to prefer qualitative risk assessment to evidence-based analysis. https://arxiv.org/abs/2009.08127 and https://hal.science/hal-04046408v1/file/ihm2023_baudel_en.pdf

Camilo Lascano Tribin

Great piece, James. Really liked the framing around culture, especially the idea that we need to allow the right culture to emerge, rather than try to legislate it into existence.

The more rules we layer on, the more we incentivise people to play the system instead of actually building for safety.

A very worthwhile read.

James Kavanagh

Thanks for reading, Camilo. Glad you enjoyed it.

AI Governance Lead

I have been saying this for years and it's still true: "AI governance today suffers from a troubling disconnect: it's populated by policy and legal professionals without deep technical understanding or experience, while the engineers and data scientists who actually build and monitor AI systems are treated as subjects of governance rather than partners in creating it."

FWIW, I do predict (using myself as an example) that more engineers and engineering leaders will make their way into the field.
