Discussion about this post

Karen Smiley

This is a tour de force, @James Kavanagh. I'm looking forward to your further work on this. Some questions:

1. Do one or more of these standards cover the ethical treatment of data workers who do enrichment (labeling or annotation)? (Perhaps under Governance & Leadership?)

2. Do the 5 controls under Safe and Responsible AI cover proactively identifying and mitigating biases?

3. Where do consent, credit, and compensation (the 3Cs) for creators, and environmental resource efficiency, fit in?

4. You mentioned "Not building general-purpose foundational models (e.g., this is not for OpenAI, Anthropic - they have some additional requirements under the EU AI Act that are not generally applicable)." Everything in the map (and more) still applies to the foundational model companies, right?

5. Do you know of any person or organization who is, or will be, tracking which companies have certified their compliance with the standards you include here? (e.g., Anthropic recently obtaining ISO 42001 certification)

Thanks!

Bronwyn Ross

Great work @James Kavanagh. I've undertaken a similar exercise, compiling the recommended practices or controls from some of the frameworks you mentioned (plus some others) and grouping them under 7 domains: Strategy, Governance, Procurement, People, Compliance, Data and AI development. I also found it helpful to tag each control by lifecycle phase and potential functional owner. It was largely manual work, conducted by reading through the source documents and making some judgement calls, much as you described... but worth it, to come up with some universal controls that respond to several standards.
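A minimal sketch of how such a cross-framework control catalogue might be represented, for illustration only; the control IDs, domain names, lifecycle phases, and owners below are hypothetical placeholders, not Bronwyn's actual taxonomy:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Control:
    """One recommended practice, tagged for cross-framework mapping."""
    control_id: str
    description: str
    domain: str              # e.g. "Governance" (one of the 7 domains)
    lifecycle_phase: str     # e.g. "Design", "Operation"
    functional_owner: str    # e.g. "Risk Office", "Data Governance Lead"
    sources: list[str] = field(default_factory=list)  # frameworks it responds to

# Hypothetical example controls
controls = [
    Control("GOV-01", "Maintain an inventory of AI systems", "Governance",
            "Operation", "Risk Office", ["ISO 42001", "NIST AI RMF"]),
    Control("DATA-03", "Document data provenance and consent", "Data",
            "Design", "Data Governance Lead", ["ISO 42001"]),
]

# Group control IDs by domain for reporting
by_domain = defaultdict(list)
for c in controls:
    by_domain[c.domain].append(c.control_id)

print(dict(by_domain))  # {'Governance': ['GOV-01'], 'Data': ['DATA-03']}
```

Tagging each control with its source frameworks in this way makes it straightforward to show that one universal control satisfies several standards at once.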

