As someone who is both an aviation enthusiast and a provider of AI governance tools, the only stance I can take is one of complete agreement. Risk management documentation might help reduce legal risk, but if the documents are disconnected from the underlying technical reality, and if no one in leadership understands what the risk analyses really imply, then it's all just kabuki theatre.
Loved this post. Thank you.
Excellent insight. We've been working for a few years to bring the methodologies of human factors from industrial safety into the practice of Model Risk Management and Human Oversight, but it's a long road ahead. GRC managers seem to prefer qualitative risk assessment to evidence-based analysis. https://arxiv.org/abs/2009.08127 and https://hal.science/hal-04046408v1/file/ihm2023_baudel_en.pdf
Great piece, James. Really liked the framing around culture, especially the idea that we need to allow the right culture to emerge, rather than try to legislate it into existence.
The more rules we layer on, the more we incentivise people to play the system instead of actually building for safety.
A very worthwhile read.
Thanks for reading, Camilo. Glad you enjoyed it.
I have been saying this for years and it's still true: "AI governance today suffers from a troubling disconnect: it's populated by policy and legal professionals without deep technical understanding or experience, while the engineers and data scientists who actually build and monitor AI systems are treated as subjects of governance rather than partners in creating it."
FWIW, I do predict (using myself as an example) that more engineers and engineering leaders will make their way into the field.