How Executives Make Informed Cyber Decisions

Cyber insurers consistently point to legal liability as your greatest exposure. The best predictor of post-breach legal costs isn’t the breach itself; it’s whether you can prove your cybersecurity program was reasonable. Reasonableness doesn’t mean perfection. It means showing you made balanced, cost-justified decisions to reduce risks to the public as well as your business. Regulators and litigators evaluate how much you invested relative to the risks you sought to prevent. When a breach happens, they’ll find weaknesses, but what matters is whether your program was built to achieve fair, defensible outcomes. Regulators increasingly define “reasonable” cybersecurity as safeguards that reduce risk to the public without being more burdensome than the risk itself. Build a defensible risk management program that can withstand attackers, auditors, and attorneys.

SPEAKER: Chris Cronin, ISO 27001 Auditor |  Partner, HALOCK and Reasonable Risk  |  Board Chair, The DoCRA Council

 

TRANSCRIPT

Good morning, everybody. Thank you for coming in on a Thursday morning. I love giving this talk because it combines a lot of things that are really important.

Governance and risk management, but also social ethics, public ethics, living a life that you want to live. Any of us involved in cybersecurity knows that we want to protect people. That’s what our role is. And sometimes we have a difficult time getting everyone in the organization on board.

This conversation helps you learn how to do that.

So, a little bit about me, I’m that guy.

So I am a partner at HALOCK Security Labs and Reasonable Risk, developed something called Duty of Care Risk Analysis (DoCRA), which I see some familiar faces some people know and use.

Also I’m the principal author of something called the CIS Risk Assessment Method. And then we have Darren Stevenson from CIS who’s gonna be speaking a little later today. That’ll be a good talk.

What I’m talking about today is this rise of governance. Now you’ve heard this come up.

It’s become something people have been talking about in terms of regulation and cybersecurity standards.

But a lot of folks aren’t really sure what we mean by governance. We’re gonna walk you through what that is today, and then how to actually adopt it.

And as you’re using governance, how you can start to take what’s meaningful about your work in protecting people and making it part of sort of a social ethics or public ethics activity.

So first off, this is one of those roles where you are not supposed to be taking on more work yourself. This is not governance where you’re given more load. How many people have a sense, when you see a new regulation, that it means more tough work for you to do? Is this a common thing, that a new regulation means more work for you to do?

What we’re going to actually help you see is that governance is about sharing the load and helping everyone know what their role is, helping get some of the weight off of you that other people in your organization will take on.

There’s a Rorschach test that we love to bring people through to get their understanding of what a regulator is there for. We ask, is this regulator handing you a tool or threatening you?

And a lot of folks will say, well, it’s a regulator who’s clearly threatening me with a hammer. Well, if you see regulation that way, then you will fight it. You will think of it as something that’s very difficult to master. If you see that this person is handing you a tool, you can start to say, wait a minute. This regulation actually helps me solve problems that I was not able to solve before. That’s the nature of the new requirements for governance that are coming up.

And who remembers the SolarWinds attack from 2020 that compromised government systems and corporate systems? SolarWinds was attacked, which meant that attackers were able to get into the networks that SolarWinds was protecting. So their then-current and recent CEOs were testifying in front of Congress, and they clearly said the reason this attack happened, the reason international adversaries could get into government networks, was because an intern didn’t handle their password correctly. Thank you very much. I will now go home. And what did everyone do? They said it was the most disgusting thing they’d ever seen a CEO do.

Why? Well, for two reasons. One, if you’ve got a system that you’re protecting government networks with and it’s so weak that an intern who screwed up their password could bring it all down, terrible, terrible solution. Right?

Terribly architected. And relatedly, if you’re the kind of organization that says, hey, look, we’re just the executives here. We can’t control what interns do. Why would you blame us?

This actually sounds different from every other thing that you hear in business.

Right? At Enron, would you say, hey, well, you know, I know there was a lot of fraud going on, but that accounting team, they were in a different office. I didn’t know what they were doing. No.

We don’t expect executives to be able to get off the hook when something big happens, even if it was anyone on the team. They have to take responsibility. So this got a lot of people upset, and of course then you had the Securities and Exchange Commission (SEC), the National Institute of Standards and Technology (NIST), the update to the NYCRR that people think of as NYDFS, the Center for Internet Security. A lot of organizations started to say, we’ve got to bring governance into this thing. Meaning, executives have to own their part of cybersecurity risk management.

This is a clue as to how this starts to get the weight off of the people who’ve been carrying the weight for cybersecurity. It’s saying, hey executives, you actually have some responsibility here. What we’re gonna spend our time on this morning is saying, How do you do that? How do you make that happen? Alright.

So let’s first clarify terms. When we talk about governance, we mean these things.

Communicating responsibilities and risks to the enterprise. Right? So an executive needs to do that.

Assigning ownership.

The cybersecurity administrator is not the one who assigns ownership, right? That’s an executive activity. Providing resources: budget, prioritization, requirements for collaboration. Testing and monitoring; think about audit.

Reporting issues and then continuous improvement. Right? All of these things are the responsibility of executives.

Risk management, we tend to think of as understanding the likelihood and the impact of foreseeable threats.

That’s what we mean by risk: likelihood and impact, at least. Planning safeguards, implementing them, and then measuring and reporting. So if you’re doing risk management and you’re reporting up to executives and you’re saying, hey, we’re actually not reducing our risks, we didn’t have the budget to implement these controls, then what’s the executive’s responsibility? To provide the resources to make sure that you’re able to meet your requirements. This is now governance. This is now required by a lot of new regulations. Can you start to see how that regulator was handing you a tool?

Okay. Now, how are we gonna do this? Alright. This is just the shorter version of that.

Alright. I’m gonna ask a question about maturity assessments, because this is one of the real big ones. So HALOCK does a lot of work in helping our clients implement risk management and governance, but we are also expert witnesses in a lot of cases. We help regulators, we help plaintiffs’ attorneys and defense attorneys. We help our clients when they have disagreements with regulators. So we help everyone understand everyone else, and that’s part of what our expertise is.

Breach case after breach case, we see maturity assessments where executives have not been able to make decisions. Right? So I’m gonna do an informal survey here. I’d like to see hands. How many of your organizations have undergone a cybersecurity maturity assessment?

K. Okay. That’s a lot of hands. Okay. How many of you were told that you’re at three point two or three point four compared to your peers? Okay.

How many of your organizations actually are at three point two or three point four?

Okay.

Everyone, take a look around the room. How many hands are up?

Zero hands are up. Zero hands are up. But when you talk to cybersecurity companies that do maturity assessments, they will tell you your peers are at three point two, three point four.

But that’s just not true. Why do they do that?

Well, if you take a look at a maturity scale, one through five, three’s in the middle.

A cybersecurity consulting firm that is primarily focused on just its own financial success wants the same client year after year after year. If they go to clients and say, hey, by the way, you’re at two point four, you need to get to five, will they be invited back next year?

No, because they set a hurdle that the executives say sounds hard.

If you say, hey, just get over half.

Then the client says, well, get over half. Okay. So what do they do? What do these consulting firms do? They invent a fiction that your peers are at three point two.

It’s entirely a lie. It’s an entire fiction, right? And we know that, because no hands went up. By the way, every time I give this talk and do that survey, the maximum number of hands that go up is one, right? So the reason they’re doing this is because they wanna be invited back.

What does three point two mean? Well, the law tells you you have to be able to measure and improve your cybersecurity controls. You have to test them. You have to audit them, test them, and fix the ones that are broken.

That’s what four is. Why would you ever tell your clients to get to three point two if four is the minimum for regulatory compliance? Right? The other thing is, would you wanna be as secure as your peers when your peers are getting hacked?

Of course not. So we want to disabuse people of this maturity assessment approach. We know people like to ask, how am I doing now? How am I doing over time? But there are zero regulations that tell you to get to a certain maturity score, or to do maturity assessments at all. No regulation does. They tell you to do risk assessments.

What we know about maturity assessments is that they’re not outcomes-based. They just tell you what your score is, but they don’t tell you what will happen if your score remains this way, or what would happen if you had a different score. In other words: who are you hurting? What is the likelihood and the impact of them getting hurt? What is the likelihood and impact of your business being okay?

So we wanna be sure that we’re looking at outcomes.

That’s why we’re doing risk analysis. We’re asking ourselves what the likelihood and the impact are of something going wrong.

Right? We wanna know not just in terms of how bad it is for us, we wanna be sure we know how bad it is for other people.

How many of you have done risk assessments where you’re doing likelihood and impact analysis?

Okay. We got a very few hands up. How many explicitly say this is the pain that we would go through, this is the pain the outside world would go through?

Hot dog.

Okay. Great.

Explicitly call that out. Right? Why do you do that? Well, first of all, you’re injecting ethical behavior into the organization, because you say, we have to be sure our client is okay, our customer is okay. We have to be sure the public is okay. Right?

If you end up having to talk to a regulator and they see that you were looking out for other people, you’ve very likely been able to demonstrate reasonability, because you’re looking out for the public. We’ll talk about how that happens in a moment.

All regulations require risk assessments and reasonability.

All of them do. None of them requires a maturity assessment. Because who do regulations represent? The public. They wanna be sure that you thought through the harm you could cause to the public, and that you managed it.

Right? And you’re gonna be able to have that conversation with your executives. So when they say, Well, I thought we were doing maturity assessments. Why don’t we stick with that?

Why do risk assessments? Well, here’s the other reason why we do risk assessments. We don’t do risk assessments because we’re trying to predict the future. We’re doing risk assessments because we’re trying to change the future.

We don’t want to go down the road that our peers have gone through when they’ve had the incidents.

We want to be able to say what’s causing a problem there?

Are we susceptible to that problem? What do we do to reduce that risk?

Alright.

So HALOCK does this assessment of 10-Ks. Do we have public companies represented here?

So with the Securities and Exchange Commission (SEC), there’s this requirement that public companies have to state in their annual public report how they do risk management and governance.

And what we find looking at these is that executives actually have a very difficult time communicating with their technical teams.

And what we know should happen is that a non-technical executive has to receive information based on how they make decisions.

What we know is that a non-technical executive makes decisions by thinking about risk to corporate interests, thinking about enterprise budget priorities, and making sure they understand the strategy of the organization. Right? Each executive has some specialized knowledge about how the company works based on their professional experience.

A technical manager needs to know the budgets they can work with. What’s their remediation plan? What resources do they have? What problems are they trying to solve?

And the technicians need to know vulnerabilities in systems. What controls can’t they apply because of some technical issue? How are they doing against certain vulnerabilities? Everyone needs a certain level of information.

But what we are seeing is that people are providing non-technical executives with the information that technicians need.

What happens when an executive, a non-technical executive, is given technical information about the number of vulnerabilities on a Linux server, for example? What can an executive do with that information?

They’re gonna have a hard time doing anything at all. Right? I’ve never had a conversation with a CFO who understands how patch management actually works. Right? What is the difference between updating a kernel module and an application?

So they can’t do anything with that. We’re not communicating well between executives and technicians. So if we can’t do that, if we’re not communicating well, we can’t get governance to actually work because we can’t get the executives to understand what their role is in fixing stuff.

We’re gonna go into this chart in a little bit, but here’s a hint.

If we have good data about what we want to do and how well we’re doing against it, and those are based on what our corporate responsibilities are, then we can give executives information that they can make a decision with. We’ll go into more depth on this in just a bit.

What I’m about to show you is something based on what we call Duty of Care Risk Analysis (DoCRA). It’s how you do risk assessments so that they look at your public responsibilities and your corporate interests and make sure they’re in balance. This is gonna help your corporate communications. It’s gonna help you achieve that governance nirvana that the regulators and standards bodies are asking for. It’s also gonna help you if you get into a legal challenge. Again, we’ve been working with litigators and regulators for more than a decade, solving these problems, and we know that when organizations, even if they’ve been breached, are able to demonstrate what we’re about to demonstrate to you, the regulators and litigators leave them alone after the breach. They say you’ve done something reasonable.

The law has nothing more to ask, and they walk away. Right? You can see this in something called the CIS Risk Assessment Method, where we took our intellectual property, something we developed for our clients, and gave it away to the Center for Internet Security. Step-by-step instructions on how to do the risk analysis, right?

Duty of Care Risk Analysis, we have the DoCRA Council that maintains the DoCRA standard and Reasonable Risk, the Software as a Service (SaaS) that automates a lot of what I’m about to show you. Alright.

Okay. Again, risk equals impact times likelihood. We’re not talking about maturity assessments. We’re not talking about compliance assessments.

It’s risk analysis that we’re talking about. This is a picture of a risk register, as you’ll see in the CIS RAM. How many of you use the CIS risk assessment method? CIS RAM.

Okay. Good. Yeah. Thank you because we put hundreds of hours into developing this thing and then just gave it away, and the more people I see with their hands up, the happier I am.

But you might recognize this spreadsheet. It’s basically a list that says: these are the controls I should be applying. How am I doing on them now? And what is actually causing problems for my peers?

And so am I prepared for those problems? What would be the impact on me and on the public if something were to go wrong, and is this acceptable to me and to other people?

If it’s not acceptable to me and other people, what am I going to do that does not create more of a burden for me than the risk I’m reducing for others? That’s the magic thing there. I can’t have a cure that’s worse than the disease. If my safeguards create more of a burden for me than the risk I’ve reduced, then that’s not reasonable.
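That balancing test can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and the five-by-five scoring are assumptions for the example, not fields from CIS RAM or the DoCRA standard.

```python
# Illustrative sketch of the "cure no worse than the disease" test.
# A risk and a safeguard's burden are each scored as likelihood x impact
# on a 5x5 scale; the safeguard is reasonable only if its burden does not
# exceed the risk it reduces. All names and numbers are assumptions.

def risk_score(likelihood, impact):
    """Score a risk on a 5x5 scale: likelihood (1-5) times impact (1-5)."""
    return likelihood * impact

def is_reasonable(current_risk, residual_risk, safeguard_burden):
    """True when the safeguard's burden is no greater than the risk it reduces."""
    risk_reduced = current_risk - residual_risk
    return safeguard_burden <= risk_reduced

current = risk_score(likelihood=4, impact=4)   # 16: unacceptably high
residual = risk_score(likelihood=2, impact=4)  # 8: after the safeguard
burden = risk_score(likelihood=3, impact=2)    # 6: cost/disruption of the safeguard

print(is_reasonable(current, residual, burden))  # True: burden 6 <= risk reduced 8
```

If the burden came out higher than the risk reduced, the same check would flag the safeguard as unreasonable, which is exactly the argument you want on record.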

Remember, every regulation is asking for risk analysis and reasonability. So this tool is freely available for the world to use, and there have been one hundred seventy thousand downloads. I know we’ve got an older slide here that says one hundred forty thousand. But people are able to actually do this analysis for free. So you can get that right away.

But here’s a basic picture of what you’re doing. If we’re talking about likelihood and impact, then we want to be thoughtful about how we’re defining impact. Here we have an impact table that goes from one to five, with three different impact types. First, impact on our objectives as a business, which is typically profitability. Then our mission impact. Take, say, a device manufacturer that makes a health monitor, like the one I’m wearing right now.

But the health monitor actually shares my data with my peers to say, hey, we’re noticing that when old guys like you stand up more and exercise more and get more sleep, your health goes up. Why don’t you try that? Well, that means I’m giving my information to them.

So I get a user health benefit from that.

I’m happy to trade that information with them, and I know the information is stored in the EU and they’ve got really good privacy controls. But if they were to screw up, then they could have an obligation impact.

So we’re thinking about all those things in the risk analysis: likelihood times each of these impacts. How are we doing? How’s the world doing? Right?

And then we can draw a line between what’s acceptable and what’s not, because we can say, hey, what if one and two are fine? You would never need to fix those. Alright.

How about three? You cross a line where you now have to fix it. You have to fix your mission. You have to fix your objectives.

You have to fix your obligations. We now have a clean line to delineate between what’s an acceptable and an unacceptable risk because we thought through what we would accept and what the public would accept. This is gonna make it super easy now whenever you talk about risk to your executives to say, This is why it means something to us. It means something to our profitability.

It means something to our mission as an organization. It means something to the potential liabilities we have if we screw up.

We want our customers to know we’ve taken care of them.

If you’re in a public company, this gives you a distinction between what’s not material and what’s material.

So if you’ve ever worked on a 10-K and you see that line, you go, oh, holy cow. I didn’t realize it was that simple. It’s that simple. If you’re not in a public company or working on a 10-K, you’ll say, alright, thank you, but that’s for those people.

And then you can draw a definition for what is acceptable when you combine that with likelihood.
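That definition of acceptable risk can be sketched as a small check. The impact names, the line at three, and the numbers below are illustrative assumptions, not values from any regulation or standard.

```python
# Illustrative acceptance check on a 5x5 scale. Impact scores of 1-2 are
# fine; 3 and above cross the line. Combined with likelihood, any risk at
# or above the acceptable-risk score must be treated. All numbers here
# are assumptions for the sketch.

IMPACT_LINE = 3                              # impact scores of 3+ cross the line
ACCEPTABLE_RISK = IMPACT_LINE * IMPACT_LINE  # 9 on a 5x5 scale

def evaluate(impacts, likelihood):
    """Return the impact types whose risk score crosses the line."""
    return {
        name: score * likelihood
        for name, score in impacts.items()
        if score * likelihood >= ACCEPTABLE_RISK
    }

impacts = {"objectives": 2, "mission": 4, "obligations": 3}
print(evaluate(impacts, likelihood=3))  # {'mission': 12, 'obligations': 9}
```

The same check runs per risk against objectives, mission, and obligations, so the line you drew for yourself and for the public is applied consistently.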

In CIS RAM, you can actually then also tie your mitigation plan to when you’re gonna do it and how much it’s gonna cost.

To say, is my annual budget reasonable given the amount of risk that I have?
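As a sketch, tying each mitigation to its schedule, cost, and risk reduction makes that budget question computable. The control names, costs, and risk-point numbers below are made up for illustration; they are not CIS RAM fields.

```python
# Illustrative: tie each mitigation to its cost and scheduled quarter,
# then ask whether the annual budget is proportionate to the risk
# being reduced. All values are assumptions for the sketch.

mitigations = [
    {"control": "MFA rollout",      "quarter": "Q1", "cost": 40_000, "risk_reduced": 9},
    {"control": "Patch automation", "quarter": "Q2", "cost": 25_000, "risk_reduced": 6},
    {"control": "Log monitoring",   "quarter": "Q3", "cost": 60_000, "risk_reduced": 4},
]

annual_cost = sum(m["cost"] for m in mitigations)
total_risk_reduced = sum(m["risk_reduced"] for m in mitigations)
cost_per_risk_point = annual_cost / total_risk_reduced

print(f"Annual plan: ${annual_cost:,} to reduce {total_risk_reduced} risk points")
print(f"(~${cost_per_risk_point:,.0f} per risk point)")
```

That cost-per-risk-point figure is the kind of number a non-technical executive can weigh against any other budget line.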

That’s a pretty great thing to have for free but again, CIS RAM is freely downloadable. That’s what it’s there for.

If you’re using a commercial product like Reasonable Risk, you know you need something more robust than an Excel workbook. But this allows everyone to participate. Everyone can say, the project I’m on is gonna take this much time, it’s gonna reduce this much risk, it’s gonna cost this much money. And once you have this stuff in a heftier application, you can do some pretty interesting things.

Like pulling together that chart I showed you before. So what does this chart tell us? I’m gonna back up here a little bit so I can see it.

Okay. If I’m looking at this risk register and it goes through two hundred eight rows, and I show that to an executive, they will get very mad at you. They’ll get very frustrated. They might jump out of a window.

It’s a very frustrating thing for them to see. That is the analysis that the technician uses to know whether what they’re doing is okay or not. The executive needs to see things like this. Let’s talk about what this is.

This is a month-by-month breakdown of what our aggregated risk is against our acceptable risk.

 

 

The aggregated risk for September twenty twenty-three, the leftmost of these bars, is over fifteen out of twenty-five on a five-by-five risk scale. That’s clearly too high. Right? The organization says, with this orange line, we’ve got a plan for implementing controls. And as we implement those controls, our risk measurably goes down.

And we can measure that because we’ve got a fixed metric to do that.
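In data terms, that chart is just an aggregated risk score per month compared against a fixed acceptable-risk line. A minimal sketch, with made-up numbers and an assumed line at nine:

```python
# Illustrative: the month-by-month chart described above, in data form.
# Aggregated risk (here, the mean of open risk scores on a 5x5 scale)
# is tracked against a fixed acceptable-risk line. All numbers are
# assumptions for the sketch.

ACCEPTABLE_RISK = 9  # assumption: e.g. impact 3 x likelihood 3

monthly_open_risks = {
    "2023-09": [20, 16, 15, 12],  # aggregated risk starts well over the line
    "2023-10": [16, 15, 12, 9],
    "2023-11": [12, 9, 8, 6],     # controls implemented, risk comes down
}

for month, scores in monthly_open_risks.items():
    aggregated = sum(scores) / len(scores)
    status = "over" if aggregated > ACCEPTABLE_RISK else "within"
    print(f"{month}: aggregated risk {aggregated:.2f} ({status} the acceptable line)")
```

Because the metric is fixed, the month-over-month movement is meaningful, which is what lets an executive see whether their decisions moved the number.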

But that means that I need to be able to operate on the budget that’s required to bring it down. Now, if I review this with executives and they start to see, wait a minute, that purple line, which is our current plan, is still high. Why is our current performance against our plan worse? Why is our risk still high?

That’s a good question for an executive to ask. Do they have to know about Linux kernels that are hard to update modules for?

No.

Here’s what they need to know. Hey, we’re not finishing projects. Why are you not finishing projects? Well, you keep throwing things at us that are interfering. Has anyone had that experience? You have a cybersecurity requirement that you can’t get done because other things are in your way.

So you can say, I need protected time, or my team needs a hire, or we need that approval on that system that we asked for. But our risk remains high until we can get your support, dear executive. Can the executive make a decision from that?

They can, because you told them what’s holding you back.

This depiction is very common. When our clients are learning how to do risk management, they generally see an increasing delta between the plan and what’s actually happening. Then they have these tough conversations, and executives say, You mean if you get approval for this one thing, that risk goes down? Yeah. Well, let’s try that.

Yes.

You say, Okay, great. See you next month. You got the yes, and you come back next month, or next quarter or whatever it is, and the number went down.

And they said, Wait a minute. Okay. So you showed me the problem, and then I said yes to this thing and then the risk went down.

You go, yeah, that’s what happened. They go, holy cow, I did a cyber. And then they go and they tell all their friends, I know cyber security. My team’s got me figured out.

I can make a decision just like that, and the risk goes down. Look at me, I’m a cyber. That gets executives very excited. That’s where you want to get them.

They just need to say: what information can you provide me that helps me make a decision?

What does this look like? It looks like every other decision they make in their job, and that’s what governance is. It’s when an organization is managing cybersecurity the way they manage any other part of their business, then that’s governance. That’s what the goal is.

And the goal isn’t for you to take that burden on yourself, it’s to help executives know what their part of this is.

Here’s another chart. This also comes from Reasonable Risk.

This is an accumulated risk metric. And this is one of those things that’s very powerful when you see it in practice. Here’s what it means. Again, over time, the earliest being all the way to the left, we did a risk analysis, and we had a lot of high risks. That’s what the red is.

Yellow is unacceptable but not high, and then green is low risk.

If you’re tracking your low risks, you’re not trying to say, well, the risk is resolved, take it off the chart. No, you say, I’m gonna keep the acceptable risks on record for a couple of reasons. One, I wanna know why I believe I’m posing an acceptable risk to the public and to my business. You record that. What are you saying here? You’re saying that over time, we keep finding high risks, but we’re able to resolve them.

We stack up more and more green because our red and our yellow get smaller. Right?
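The accumulated view described here can be sketched by bucketing every risk ever recorded, with resolved risks kept on the chart as green rather than dropped. The score thresholds below are assumptions for the sketch.

```python
# Illustrative: the accumulated-risk metric described above. Resolved
# risks stay on the chart as green instead of dropping off, so the record
# of what was found and fixed keeps growing. Thresholds are assumptions.
from collections import Counter

def bucket(score):
    """Classify a 5x5-scale risk score (thresholds are assumptions)."""
    if score >= 15:
        return "red"      # high, unacceptable
    if score >= 9:
        return "yellow"   # unacceptable but not high
    return "green"        # acceptable or resolved

history = [20, 16, 12, 6, 4, 20, 9, 6, 4, 4]  # every risk ever recorded
print(Counter(bucket(s) for s in history))     # healthy: green grows over time
```

A stagnant program shows up immediately in this view: the red and yellow counts stop shrinking and the green stops growing.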

That’s a pretty cool thing to be able to tell an executive because they’re able to say, as a team, we’re grooving. We’re looking, we’re fixing. That’s what we need to be doing in cybersecurity. Okay. I’m gonna ask for a volunteer.

It’s a different chart. Same chart but different company.

What does this one say, especially in the past few months?

Yep.

Stagnant. Exactly right. Thank you. It got stagnant. You can see that you’re not reducing your red, you’re not reducing your yellow. This is evidence of a program that’s not working. Lost interest, didn’t get support.

But executives need to see that. This isn’t, hey, you know, the cybersecurity people aren’t doing their job. It’s, no, we all have a role. None of us is getting our jobs done. This is stagnant. We are not paying attention to our program.

You can get this down to the technical manager’s role, where they say, what is the control that I need to pay attention to? I wanna pay attention to the highest-risk controls, and I wanna show where they need to go.

So now the technical manager, using the same data, has information that can drive their behavior. What is my priority? What have I gotta fix? And here’s one that’s been fascinating. This is a picture of the liability portion of breach costs at large companies.

When you have a breach, this orange represents the percentage of claims costs that go toward handling liabilities. The blue is handling the incident and recovery.

This comes from the insurance industry itself: ninety-two percent of breach costs have to do with liability. If you have a risk assessment method that demonstrates that you were looking at risk to other people and balancing it against your ability to get your job done, the liability portion goes away.

Imagine that your cyber risk disappears by ninety-two percent. That’s what that means.

Just by running governance correctly. Okay.

So who’s using DoCRA? Well, these numbers had to be updated. It’s now more than four hundred HALOCK clients that are using DoCRA. We’ve now had a hundred seventy thousand downloads of the CIS Risk Assessment Method, just of the most recent version.

We have got twelve U.S. state attorneys general using it explicitly, saying: use DoCRA, that’s what we mean by reasonable. We are working with federal regulators who are doing the same. And fifty attorneys general have used it, forty-nine of the U.S. states, with D.C. giving us fifty.

They used it in the Marriott case recently. So regulators are starting to learn how to do this more and more.

So what can you do?

Next time you’re doing an assessment, just make it a risk assessment and not a maturity assessment because the maturity assessment gives you a score, but it’s not an outcome.

And you need an outcome-based approach that executives are gonna be engaged in. And again, you can use it for free if you’re using CIS RAM. If you need help with this, if it’s the first time you’ve done it or you have a complex environment, we’re happy to help. If you need an automated approach because there’s no way you’re gonna be able to do all that math and those graphs yourself and do your job, you need a tool like Reasonable Risk that builds this in and lets it operate for you.

So what we encourage you to do is just make sure that you’ve got what you need to get ready for a risk assessment.

It’s the materials, but it’s also the time.

Get that risk assessment ready to go. You’re gonna need to reserve time to get it done, and not just your own time but time from your colleagues. And you really are going to need to demonstrate it. The point of governance is to make sure that everybody’s participating, so you’re gonna need some way to share that information.

But we’re happy to talk about this. I will talk about this till I’m blue in the face. So come see me at the HALOCK and Reasonable Risk tables. Are there any questions? I know we’re running low on time, but happy to take them.

Okay. No questions means one of two things. Yes.

Yep.

Oh, Mike’s coming your way.

It’d be great if his name was actually Mike and he could say, yeah, and I’ve got a mic.

Mike’s coming your way. Yeah.

Just a quick question. Do you guys just use CIS, or can you use NIST CSF or NIST risk management framework?

Yeah, that’s a great question. Yeah, DoCRA can be used with any control standard at all. So thank you for pointing that out. It’s the balancing test that you’re doing. We’ll do it against regulations, we’ll do it against control standards, any control standard that you’ve got. We’re using it against AI security questions as well.

We released it with CIS RAM because they’re a nonprofit organization, and they were very happy to give something away for free. But that’s a great question, thank you.

All right. We will wrap up. Thank you all for attending. We appreciate it. And come see me at the HALOCK and Reasonable Risk booth. Thanks.

 

Request the presentation deck.

View the Gallery.