Practical, proven methods to assess AI risks and a straightforward way to factor and communicate unique aspects of AI risk, including dynamic feedback loops and impact velocity.
Thanks for the excellent post, James - it makes a lot of sense to capture the dynamics of risk materialisation in the overall risk assessment. This does, however, expose a challenge with the EU AI Act defining risk tightly as "the combination of the probability of an occurrence of harm and the severity of that harm" (Article 3(2)). That definition seems to present an obstacle to adopting an amplification score, or any other risk-management approach adapted to the specific risk characteristics of AI systems. Should the EC consider adopting a wider definition of risk to allow exploration of innovation in risk scoring such as you suggest, e.g. the ISO 31000 definition of risk as "effect of uncertainty on objectives"?
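To make the tension concrete, here is a minimal sketch contrasting the Act's probability-times-severity formula with an amplification-adjusted score of the kind the post describes. The function names, scales, and the specific multiplier are illustrative assumptions, not taken from the Act or the post:

```python
# Hypothetical sketch (illustrative names and scales, not from the AI Act).

def base_risk(probability: float, severity: float) -> float:
    """Risk per Article 3(2): probability of harm times its severity."""
    return probability * severity

def amplified_risk(probability: float, severity: float,
                   amplification: float) -> float:
    """Base risk scaled by an assumed amplification factor (>= 1.0)
    capturing dynamic feedback loops and impact velocity."""
    return base_risk(probability, severity) * amplification

# Two systems with identical base risk diverge once dynamics are scored.
static_score = base_risk(0.2, 5.0)             # 1.0
dynamic_score = amplified_risk(0.2, 5.0, 3.0)  # 3.0
```

Under the Act's narrow definition, both systems score identically; the amplification term is what separates them.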
Thanks for sharing your perspective in this informative article. The amplification factor makes a lot of sense. I would like to connect with you.