A practical outline of how to build the core risk management framework for AI Governance, starting with approaches to risk categorisation and an overview of useful guidance and resources.
@James - Thanks for this great AI risk topic introduction. Unfortunately, the vibe coding hype is going to cause a huge increase in risk issues. The quality of software application coding has dropped enormously. A lot of managers and non-engineers now think engineering skills are no longer necessary and that coding can be done by anyone via LLMs.
Thanks for reading Ilona,
You might be right - that's certainly the 'vibe'. But I'm not so sure that's how this plays out. The reality is that there is a vast, vast amount of awful legacy code out there, hiding behind firewalls, riddled with vulnerabilities. There's an extraordinary amount of access creep in most organisations, where users accumulate access to information they don't ever need, including personal data. Data proliferates in at best a semi-controlled state in the majority of enterprises. Humans can't scale to these problems - the best engineers can't fix them.
Possibly it works out differently. I think increasingly sophisticated models can actually identify vulnerabilities and resolve deficiencies in code to a quality bar, and certainly at a speed that surpasses most senior engineers. They can systematically identify and resolve access problems, and they can identify uncontrolled data and protect it. However, they make mistakes and take wrong directions at an unacceptable rate - so they need to be guided by thinking, competent humans - by engineers.
I think the idea of someone with no engineering skill, understanding or experience building a production-ready application alone is absurd - the state of the art is nowhere near adequate. Every suggestion I've seen otherwise is no more than marketing or wishful thinking. But a senior engineer (even one whose skills may be a little out of date), one who understands scaling, operational bottlenecks, data structures, algorithmic complexity, threat models (i.e. the enduring wisdom and talent of a skilled software engineer) - they can be extraordinarily productive with these new models.
And combining that human talent with the speed and rigour of a sophisticated coding model - that's where I think the real benefit lies. So I get the concern about vibe coding, but my bet right now is that, in a well governed enterprise, it will be little more than a passing fad.
You are probably right - there is a lot of legacy code, just as you described it. And I have been lucky, having mainly worked with great people in the past - with agile concepts even before the term was coined internationally, and with very good software architects. So my opinion is that GenAI tools are great if you know what good software architecture looks like and what good coding styles and concepts are. If you don't know, and aren't aware of your own limitations, the coding results are of low quality. It is engineering hell if management expects experienced engineers to fix it afterwards. That happens too often now.
Thank you for sharing this knowledge! I have recently started with AI Governance and your content has been very helpful 😀
Reference #9 links to a paper on RAG evaluation instead of one discussing MIT's AI risks.
Thanks Trena, sorry about that. It's updated with the correct link now.