Discussion about this post

R Y I O N P U N

Really appreciated this article. It nails the importance of building multi-layered governance structures with dynamic safety controls and real-time oversight. And yes, absolutely: engineers need to be at the heart of AI governance. One thing I’ve been thinking about a lot lately: so much of today’s AI, especially open-source models and API-first tools, comes out of decentralized, fast-moving environments, where there’s often no single “owner” or controller. So how do we build safety and governance mechanisms that work in those settings? How do you design for distributed control and escalation when there’s no central gatekeeper? Would love to hear your take, or whether you know of others tackling this challenge.

John Benninghoff

Hello James, I really like the idea of using STPA to inform AI system design!
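
For anyone who hasn't tried STPA, here is a minimal sketch of its unsafe-control-action (UCA) step applied to a hypothetical AI serving pipeline. All controller, action, and process names below are illustrative, not from any real tool or from the article:

# Minimal sketch of STPA's unsafe-control-action (UCA) step.
# All controllers, actions, and processes are hypothetical examples.
from dataclasses import dataclass
from itertools import product

# The four ways STPA says a control action can be unsafe.
UCA_TYPES = (
    "not provided when needed",
    "provided in a hazardous context",
    "provided too early or too late",
    "stopped too soon or applied too long",
)

@dataclass(frozen=True)
class ControlAction:
    controller: str  # who issues the command
    action: str      # the command itself
    process: str     # the controlled process

# Hypothetical control structure for a model-serving system.
CONTROL_ACTIONS = [
    ControlAction("human operator", "approve model release", "serving pipeline"),
    ControlAction("monitoring service", "trigger rollback", "serving pipeline"),
]

# Enumerate candidate UCAs; each candidate would then be checked against
# the system's identified hazards to decide if it needs a safety constraint.
for ca, uca_type in product(CONTROL_ACTIONS, UCA_TYPES):
    print(f"Candidate UCA: '{ca.action}' ({ca.controller} -> {ca.process}) {uca_type}")

In a full STPA analysis, the hazardous candidates are traced back to losses and hazards and turned into safety constraints on the control structure; the sketch only shows the enumeration step.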

I can think of a couple of challenges. First, STPA is difficult to learn, and I haven't seen it adopted as a practice in software companies, with two exceptions: it was partially implemented at Akamai and, more recently, adopted more fully at Google. Google's engineers have found STPA useful and recently presented it at SREcon (https://www.usenix.org/conference/srecon25americas/presentation/klein), but few organizations outside Google have the resources and incentives to implement it.

Second, as the article "A manifesto for Reality-based Safety Science" points out, STAMP, the accident model underlying STPA, has remained largely unchanged since its introduction in 2004 (https://doi.org/10.1016/j.ssci.2020.104654). I should note that the paper's broader goal is to call attention to a more widespread problem in safety science: a lack of empiricism.

