My AI-supported risk analysis assistant mirrors a common pitfall in risk management: focusing on irrelevant controls rather than genuine threats.

I have created a risk analyst AI based on industry best practices, or so I assume. This is part of a quest toward more compliance automation, because as an industry we are falling behind in security.

I am running through a simple example: a chatbot that answers questions over a nonsensitive dataset. The analyst dives deep into all kinds of controls that, in my view, are quite irrelevant to the risk at the business level.

Looking a little deeper, it feels like the analyst is overanalyzing the architecture without filtering by the actual harm that could be done.

For example, in the system under analysis, there is no sensitive data. That means there is no relevant data confidentiality risk, and encryption controls for data in motion are therefore irrelevant. Even discussing encryption is a waste of time.

At this moment, I feel the same frustration that pops up in a lot of IT risk decisions: people being bombarded with requirements for controls that do not affect the actual risk faced by the system owner and the system users.

I know the answer that I give in my teaching and consulting. If there is no realistic path from a potential threat to actual pain for some stakeholder, don't bother with the threat.
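To make that rule concrete, here is a minimal sketch of the filter I have in mind. The names, the harm inventory, and the structure are my own illustration, not part of any standard or of the assistant itself: a threat is only worth analyzing if it has a plausible path to a harm some stakeholder actually cares about.

```python
from dataclasses import dataclass


@dataclass
class Threat:
    name: str
    # Stakeholder harms this threat could plausibly lead to,
    # e.g. "customer data exposed", "service unavailable for users".
    harm_paths: list[str]


def worth_analyzing(threat: Threat, harms_that_matter: set[str]) -> bool:
    """Keep a threat only if it has a realistic path to a harm
    that some stakeholder of this system actually feels."""
    return any(h in harms_that_matter for h in threat.harm_paths)


# Example: a chatbot over a nonsensitive, public dataset.
# Confidentiality harms are simply not in the inventory.
harms = {"service unavailable for users", "misleading answers to users"}

threats = [
    Threat("eavesdropping on data in transit", ["customer data exposed"]),
    Threat("prompt injection producing wrong answers", ["misleading answers to users"]),
]

relevant = [t for t in threats if worth_analyzing(t, harms)]
print([t.name for t in relevant])  # only the prompt-injection threat survives
```

With that filter in place, the encryption-in-transit discussion from the chatbot example drops out before anyone spends time on it, because the only harm it maps to is not one this system can cause.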

But maybe I can configure or train the analyst to pay more attention to this, for instance along the lines sketched below. What do you think?
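For concreteness, this is the kind of instruction I have in mind: put the harm-first filter into the analyst's system prompt, so it enumerates stakeholder harms before it ever mentions a control. The wording, the three-step structure, and the generic chat-message format are my own assumptions, not a tested configuration.

```python
# Sketch of a system prompt that forces the harm-path filter up front.
SYSTEM_PROMPT = """You are a risk analyst.
Before discussing any control, do the following:
1. List the stakeholders of the system and the concrete harms each could suffer.
2. For every candidate threat, state the path from threat to one of those harms.
3. Discard threats with no realistic harm path, and say so in one line.
Only then recommend controls, and only for the threats that survived step 3."""


def build_messages(system_description: str) -> list[dict]:
    """Assemble chat messages for the analyst (generic chat-style format)."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Analyze the risks of this system:\n{system_description}"},
    ]


messages = build_messages(
    "A chatbot that answers questions over a nonsensitive, public dataset."
)
```

Whether a prompt tweak like this is enough, or whether the filtering has to happen in a separate pre-analysis step, is exactly the question I am still chewing on.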