Presented at RIMS RiskWorld 2022

In post-data breach litigation, you must demonstrate due care and reasonable control. Learn what basic questions the court will ask and how the duty of care risk assessment (DoCRA)—based on judicial balancing tests and regulatory definitions of reasonable risk—helps you answer them. Distinguish the risk assessment criteria that allow for comparison, reflect your organization’s values and hold up to public scrutiny. See how you can employ DoCRA to fulfill regulators’ requirements for a complete and thorough risk assessment following a data breach, with a valuable perspective for cyber insurance. Understand how to define ‘reasonable security’ through examples from ‘whistleblower’ movies and their risk management process.

PRESENTER: Chris Cronin, ISO 27001 Auditor | Board Chair – The DoCRA Council | Partner – HALOCK Security Labs

View the recording of this insightful session.

 

TRANSCRIPT OF “The Questions a Judge Will Ask You After a Data Breach”

Hi. I’m Chris Cronin, and I’m chair of the DoCRA Council and also a partner at HALOCK Security Labs. In the next twenty minutes, I’m going to convince you that we can reduce cyber liability claims by more than half, and thereby reduce the cost of cyber breaches by more than half. And we’re going to do it by using the word reasonable like an expert. So also in the next twenty minutes, I’m going to show you a definition of reasonable so concise and so convincing that you’ll be able to win any argument with a regulator or litigator or a judge who asks you whether your cybersecurity controls were reasonable.

We’re doing this because more than half of data breach claims costs can be contained easily when we reduce them using the word reasonable. If we can convincingly show reasonable security, our liability goes down. So how do we know this? Well, my team and I act as expert witnesses for regulators and litigators. We help them determine whether a breached organization was being reasonable when it was breached.

Just because you were the victim of a breach does not mean that you were acting unreasonably. And if someone is acting reasonably when they’re breached, well, their liabilities go down, which makes us very popular with our clients, who have us come in and help them develop cybersecurity risk management programs that are measurable and that help executives make informed decisions about priorities and investments, because we know what’s causing liabilities to go up and down.

So having done many cases and worked with regulators and litigators, and we’ll get more into that, we know what causes the liabilities to go up and what causes them to go down. So, in the next twenty minutes, we’re going to focus on what causes them to go down so that you can take advantage of that yourself.

So, take a look at some data. This comes from an annual report by NetDiligence, the NetDiligence Cyber Claims Study. This is the 2019 study; the 2020 study came out after, and we’ll show you what that shows. The study looks retroactively over the past five years at claims costs that insurance carriers pay out after breaches.

And what they show is that there are two buckets. There’s the amount of money that covers the initial response, and then there’s the bucket that covers fines and settlements that come later, when the regulators and the lawyers get involved. And in both cases, they find that the costs that go to initial response are in the minority, whether you’re in a small and medium-sized organization or a large organization.

And whether you’re small or large, more than half of those claims costs and breach costs are coming from litigation and regulatory fines.

Now here’s the irony.

The hardest item here to prevent is the smallest portion of the payment.

The easiest to prevent is the largest portion.

So, if you can do something easy to reduce the claims cost or the breach costs, you’d focus on it. And that’s what we’re focused on today during this presentation.

Now in 2020, NetDiligence evaluated the data a little differently. But what they showed was that liability costs doubled their previous five-year trend.

So, it’s just getting worse. So, the better mastery you have of something this simple that takes care of that much liability, the better off you will be.

So where does liability come from? Well, it comes from attorneys. It comes from lawyers. They set the price on liability through negotiation.

It’s plaintiffs and defendants, negotiating in a lawsuit.

It’s regulators and your attorneys negotiating, when regulators get involved in their investigations.

But lawyers are setting the price on that. So, it’s important to understand what lawyers are looking for to demonstrate reasonableness, which is the opposite of liability.

And once we do that, we can get rid of that big blue portion of claims costs, which is our target here.

So how do we get lawyers to agree on the word reasonable? Well, most folks would say that’s going to be hard to do. They make their living by disagreeing. But you’ll see that they all have something at stake with a clear definition, and we’ll talk about what that is and how some of the cases we’ve worked on have brought liability down to zero by people just agreeing on the word reasonable. But first, let’s go to the movies.

So, in the 1990s and since, there’s been a fun trend of whistleblower movies. These are movies where a corporation does harm, and the little guy and a small attorney go against that corporation and win. But they all have the same trajectory. It starts with corporate harm.

Some big corporation is doing something that’s hurting people. And if you drive their cars, or if you live near their factories, or if your firm is near the factory, you’re going to get hurt. But you’re the little guy against the big corporation.

Then some intrepid attorney, usually from a small law firm, digs through archives of documents to see if he can find something that can demonstrate that the corporation knew they were doing harm and were covering it up.

And then the moment happens: the smoking gun. Right? That moment in which the attorney says, that’s it. This is the document where the company says, we are protecting our profits; we’re not reducing risk to the people we can harm.

And then there’s the gotcha.

The plaintiff’s attorneys present the document to the defense attorneys, and the big, powerful corporate defense attorneys look at that memo and say, what have our clients done? They put in writing that they were looking out for their profits and not for other people.

And then justice is served. Right? This is one of those great Mark Ruffalo moments where he says, how could you? How could you? How could you let harm come to a person for the sake of your own profits?

And we should remember what that feeling is like. Right? When we’re in the audience and we see that happen, we feel that justice is being served because we don’t feel good about organizations that are looking out for their profit at the cost of other people. So now here comes the question. What is the smoking gun in a data breach case? Well, you’ll know when the regulator or the plaintiff’s attorney says this, please produce your risk assessment.

Now if you’re like me, your heart skips a beat when you hear that. Maybe your knees feel a little weak, your palms might be sweaty, but this is the moment at which you need to own up: I knew there were some things that were wrong in my organization.

Right? And you might feel like these poor folks in Erin Brockovich did. These are the defense team, if you don’t recall the movie. Now, so what do we mean by a cybersecurity risk assessment?

Well, there are a lot of methods out there, but the leftmost method here is the one that you see most commonly. There’s some concept of current controls over assets: how are the assets being protected by some level of current controls?

And then, what are the vulnerabilities we face where those controls are being used? And then, what threats could take advantage of those vulnerabilities to attack the assets?

And then what are the likelihoods and impacts of that attack? Then we evaluate the risk, and then we apply a safeguard if the risk is too high. Right? That’s a very common model.
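That common model can be sketched as a few lines of code. This is a hypothetical illustration only; the scales, field names, and threshold are assumptions for the example, not part of any standard.

```python
# A hypothetical, simplified version of the common risk-register model the
# speaker describes: an asset protected by controls, a vulnerability, a threat,
# then likelihood x impact evaluated against an acceptance threshold.

def evaluate_risk(likelihood: int, impact: int, acceptable_score: int = 6) -> dict:
    """Score a risk and decide whether a safeguard is needed (illustrative 1-5 scales)."""
    score = likelihood * impact
    return {"score": score, "apply_safeguard": score > acceptable_score}

# Example entry: weak email filtering (vulnerability) on a mail server (asset)
# exposed to phishing (threat).
risk = evaluate_risk(likelihood=4, impact=3)
print(risk)  # {'score': 12, 'apply_safeguard': True}
```

If the score lands above the line, the model says apply a safeguard; below it, the risk is accepted.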

Maturity assessments and heat maps, you can use those in a risk assessment, but that’s not the risk assessment. So, if you’re doing those and you think you’re doing a risk assessment, you’re very likely not.

Why is it important to use the full format? Well, the full format’s important because the cybersecurity industry says that’s what a risk assessment is. And if you’re going to demonstrate that you were using due care, that you were being reasonable, you’ll use the methods that the cybersecurity community brings to us. But here’s something very, very important for you to understand.

Judges use a similar analysis to determine liability and negligence.

It’s called the multifactor balancing test, and most states use one. The idea of a multifactor balancing test is to do a risk analysis: given the likelihood of harm, given what the organization was trying to do, given the benefit it was pursuing, and given the weight of a safeguard to reduce that risk, was the person negligent? Right? And if you were negligent, then your liability goes up. So, judges are using a test very similar to this.

So, here’s another really good reason to do your risk assessment the way a judge does the multifactor balancing test to determine negligence. Right? Regulators call this risk analysis.

If you were to put a maturity assessment or a heat map in front of a regulator or a judge, they would not know what you’re talking about. We got a 3.2 out of five. What does that mean? We decided not to go to 3.3. Why?

The judge doesn’t understand that conversation.

So, what do inquiring lawyers often receive when they ask for a risk assessment? They get no risk assessment at all, or a damning risk assessment, or a duty of care risk assessment. So, let’s look at what we mean by each of those.

No risk assessment.

The organization that was breached says, well, we never really did one. We’ve got a maturity assessment. We’ve got a heat map. We’ve got audits. Well, that shows they didn’t follow regulations and standards, because regulations and standards tell you to do a risk assessment, to do a risk analysis.

And there’s also no evidence that they were looking out for other people in their program. If you haven’t done a risk assessment, then you can’t show that the reason you decided to defer a firewall in favor of malware prevention technology was because it was the best thing for you and your customers combined, or for your customers alone. So, if you don’t have a risk assessment and you’re trying to figure out where your liability is, just fill in all the blue. You’ve got it all.

You’ve got the incident cost in the orange, and you’ve got the liability cost. Just fill it all in. Total liability. So why do other people matter in a risk assessment?

Just about every risk assessment that we see in practice, people just say something like high, medium, low, red, yellow, green, or here’s what it means to our profitability. But other people matter in a cybersecurity risk program because you have their data.

They’re the ones who can be harmed. You need to think through the kinds of harm that they can go through. Or you are doing something else that can hurt people; if it isn’t a data issue, it’s a functionality or a service issue. You can hurt them. They need to be in your risk assessment.

You need to know that you’re making decisions that take care of them appropriately.

And because the systems and services that you use to help them can also hurt them. We’ll talk about help and hurt in a moment.

But regulators’ jobs are to protect the public, not you. If you have a breach, the regulator doesn’t come after you because you hurt yourself.

Right? No organization ever sued itself for having caused itself harm. So, the plaintiff and defendant are never the same. You need to demonstrate that you were taking care of other people because the regulators and the plaintiffs are looking out for the people who you hurt.

So, if you’re going to have a convincing argument that you were being reasonable, you need evidence that you thought about them. So, we’re going to keep talking about this need to think about other people. Here’s the second kind of risk assessment that regulators and plaintiffs get: a damning risk assessment. Well, what’s that?

You could also call it an incriminating risk assessment.

But that one asks those same questions we had above. How would a data breach harm our profits? How would it harm our reputation? What fines and settlements would we pay? What should we invest to reduce risk to our profits and our reputation?

And that’s that smoking gun language. Right? What are we doing to protect us?

If it doesn’t say what harm will come to others or what safeguards prevent harm to others, and we’re only protecting ourselves, that’s damning. That’s an incriminating risk assessment.

Here’s what they look like in practice. We talked about those components of risk analysis before. So, we talk about the controls, and the burden of the controls that preserve assets. We look at vulnerabilities, where we’re weak; threats we’re concerned about; then likelihood and impact. But for impact, we say, what is the dollar loss of profits we get from response? And what are the dollar losses of profits we get from fines and settlements?

And then we evaluate the percentage of those dollars lost and make sure that our safeguards come in less expensive than our weighted losses. And that’s the smoking gun.

That’s where the litigators and regulators say you are investing to protect you.

Why do they think that? Well, because that’s what you said: we’re investing in it to protect us, not others. Right? Fill all that blue in.

Fill all that liability in. You’re going to have the orange response costs and you’re going to have the liability costs, because you were showing: I was taking care of me; I wasn’t taking care of the people the regulators and the plaintiffs are here to represent.

Right? We are protecting our profits. We’re not reducing risk to people we can harm. That’s what that reads like. Be aware.

So, now I’m here to give you good news. There is a definition for reasonable, because in 2021 a miracle happened.

The Sedona Conference is an organization that brings together people who are normally rivals in difficult technical areas of the law, to find common ground: at least a common understanding of what words and terms mean and what good processes are, for difficult things like eDiscovery, privacy and security, and intellectual property.

It’s a fantastic organization because it takes people who you normally think of as making their living being rivals, and gets them to collaborate to make their profession better.

And they tackled this concept of reasonable security publishing something in 2021.

I want you to pause when I come to a new slide because I want you to see the people here. I want you to look up the names James Pizzarusso and Douglas Meal in Chambers, a listing of the rankings of litigators.

And you’re going to see that James Pizzarusso and Douglas Meal are both in the top three, Douglas Meal at number one, the only person in the top band of litigators for data breach and cybersecurity litigation.

Now these two guys go against each other a lot in litigation, and they decided to come together to come up with a definition for the word reasonable. So, they’re working really hard to solve this reasonable problem. And when you get such leaders of the profession together to solve a problem like this, you can bank on the fact that you’ve got a really good definition that you can count on. But they’re not the only ones. David Cohen and Douglas Meal: look up their names alongside FTC v. LabMD.

You’ll see that they were litigating. They were at Ropes & Gray at the time, litigating in response to the FTC’s enforcement action against LabMD.

Now, Jim Trilling was also at Federal Trade Commission at the time. Now these guys are on opposite sides of a case because Doug and Dave are focused on defense.

Jim is focused on regulatory oversight and enforcement.

And they came together to say, let’s figure this out. We’ve got to have a good definition of the word reasonable. You’ll see famously, if you look this up, that the LabMD case against the FTC broke the word reasonable for the FTC. The Eleventh Circuit said the FTC can’t use the word unless they have a definition.

And then Jim and David and Doug got together to say let’s be among the people who fix this.

Also look at Bill Sampson’s name, William Sampson, of Shook, Hardy & Bacon, in Law360. You’ll see that Shook, Hardy & Bacon was recently listed in Law360 as the top litigation firm for cybersecurity. I mean, these are the industry leaders. And to prove the point, I did it too.

So, when I say, you know, we helped write the rules for that, we did. We were right in the scrum. So, we’ve been working carefully to make sure that our clients and the public have a really good definition of what this word means.

So, what does the test say? Well, if you’re an attorney, not a physicist, you’ll understand what this calculation means. It’s a variation on something called the Learned Hand rule, or the calculus of negligence. But it basically says: if you’re using safeguards that are no more burdensome to you than the risk is to others, then you’re doing something reasonable.

Post breach, there’s a slight variation in how the definition works. If the incremental burden is not greater than the incremental risk benefit, then it’s a reasonable control. But this is basically what this says.

But this is also a clue to that very concise definition for reasonable that you should take with you. Use safeguards that are no more burdensome to you than the risk is to others.
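As a rough sketch, that comparison, and its post-breach variant, can be written out in a few lines. The function names and dollar figures here are hypothetical illustrations, not the test’s official formulation.

```python
def is_reasonable(burden: float, probability: float, harm_to_others: float) -> bool:
    """Learned Hand-style comparison: a safeguard is reasonable when its burden
    does not exceed the expected risk to others (probability x magnitude of harm)."""
    return burden <= probability * harm_to_others

def is_reasonable_post_breach(incremental_burden: float,
                              incremental_risk_benefit: float) -> bool:
    """Post-breach variant: the added burden of a control should not exceed
    the added risk benefit it buys."""
    return incremental_burden <= incremental_risk_benefit

# Hypothetical numbers: a $50,000 safeguard against a 5% chance of $2M harm to others.
print(is_reasonable(burden=50_000, probability=0.05, harm_to_others=2_000_000))  # True
```

The expected risk in that example is $100,000, so a $50,000 safeguard passes; a $150,000 safeguard against the same risk would not.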

Now you understand right away why that’s going to be really welcome in business. You have a conversation with executives and say, look, I’m making this investment, but my risk analysis shows that it’s no more burdensome to us than the risk is to others. And if we are too far afield of that, we’re going to have a hard time explaining ourselves. Right?

So, we’ve seen the state of Pennsylvania, and it’s true of other states too, put this in court filings. It’s starting to make use of this test. If you look at the last of these, Commonwealth of Pennsylvania v. Hanna Andersson, you’re going to see that they’ve been using this test to help define what reasonable is.

So, there’s definitely uptake here. And the Federal Trade Commission recently, in their Log4j guidance, started to get explicit about this concept of harm to others. Right? It’s critical that companies and their vendors relying on Log4j act now, in order to reduce the likelihood of harm to consumers.

But what they’re saying there is that your risk analysis must look at likelihood of harm to consumers. They’re also saying, look, we understand everyone’s got this problem right now. We just need to see that you’re actually doing your risk management toward taking care of that, but you’re paying attention to the likelihood of harm to consumers. Right?

So, the third kind of risk assessment the litigator gets when they ask to show us your risk assessment is this duty of care risk assessment.

The duty of care risk analysis basically says: I’m going to do the same thing everyone else was doing in risk analysis, regardless of whether it’s quantitative or qualitative.

I’m just going to make sure that I’ve looked at everybody’s concerns when I think about impact. My impact, someone else’s impact, the impact to the utility, the purpose, the mission for doing the work together that we did.

And we clearly state, in plain language, how I would recognize when an impact has been negligible or acceptable to me, and how I equate that to acceptability to someone else. This is just an example of the way this is done. And then you can have a common definition for what you mean by unacceptable. And again, you can do this through qualitative or quantitative analysis; it doesn’t matter. But you need to have a common definition of how we would recognize when something’s been acceptable or not. Right?

We talked about DoCRA. Duty of Care Risk Analysis is what that stands for, but it just has three principles. And we’ve applied it to several different methods, but here are these three principles. Your risk analysis must consider the interests of all parties that may be harmed by the risk.

Risks must be reduced to a level that would not require a remedy to any party.

And safeguards must not be more burdensome than the risk they protect against. Right? That’s all the beauty of what we just showed you.
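Those three principles can be expressed as a single hypothetical check. The data shapes, party names, and threshold below are illustrative assumptions for the sketch, not part of the DoCRA standard itself.

```python
# A hypothetical check expressing the three DoCRA principles in code.

def meets_docra_principles(impacts: dict, remedy_threshold: int,
                           safeguard_burden: float, risk_addressed: float) -> bool:
    """impacts maps each interested party (e.g. 'us', 'customers') to an impact score."""
    # Principle 1: the analysis must consider parties beyond ourselves.
    considers_all_parties = "customers" in impacts or "others" in impacts
    # Principle 2: risk must sit at a level requiring no remedy to any party.
    no_remedy_required = all(score <= remedy_threshold for score in impacts.values())
    # Principle 3: the safeguard must not be more burdensome than the risk.
    burden_proportionate = safeguard_burden <= risk_addressed
    return considers_all_parties and no_remedy_required and burden_proportionate

print(meets_docra_principles({"us": 2, "customers": 2}, remedy_threshold=2,
                             safeguard_burden=50_000, risk_addressed=100_000))  # True
```

Leave the customers out of the impacts, and the same safeguard fails principle one, which is exactly the smoking-gun pattern described above.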

If you can do that, if you can accomplish those things in your risk analysis, and you can, then congratulations, you’re no longer interesting to a lawyer.

So, what do we mean by duty of care risk analysis? Well, on the left, you see the method column listing all sorts of risk analysis methods that are out there. CIS RAM is one, ISO 27005, NIST 800-30, and so on. AIE and FAIR: AIE, Applied Information Economics, is normally associated with Doug Hubbard’s great work, and FAIR does such a great job decomposing what a cybersecurity threat is based on.

All of those take care of the major functions of a normal risk assessment. Right? All those components that we showed earlier on.

The challenge, though, is that if we want to show due care, we have to add things like providing a standard of care, the controls we are using. ISO and NIST do that. Risk IT does that. We need to estimate the magnitude of harm to other people.

You can do that with ISO 27005 and NIST. And in fact, they tell you to, but most users we see don’t.

And CIS RAM does that quite a bit. But then you can define acceptability and reasonableness based on the comparison of the safeguard you’ll apply to the original risk, to determine whether the burden is greater than the risk. Right? So, this is what we mean by duty of care. It’s just adding these other questions to an existing good process.

If you’re going to be doing Duty of Care Risk Analysis, whether you’re using CIS RAM, which is semi-quantitative, or you’re using something that’s qualitative, or something that’s quantitative, you do have to find a way to combine qualitative and quantitative methods.

Here’s just a basic illustration. You may be able to draw a line on certain impacts between acceptability and unacceptability, so that when you do your likelihood and impact analysis, you can determine whether something’s inside or outside of the slope of acceptability.
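One way to sketch that line is to pair plain-language impact definitions, which name harm to others as well as harm to us, with a numeric threshold. The tiers, scales, and threshold here are hypothetical examples.

```python
# Hypothetical qualitative impact tiers with plain-language definitions.
IMPACT_TIERS = {
    1: "Negligible: no meaningful harm to us or to others",
    2: "Acceptable: harm that we and others can absorb without a remedy",
    3: "Unacceptable: harm that would require a remedy to some party",
    4: "Catastrophic: severe, hard-to-remedy harm to others",
}

ACCEPTABLE_SCORE = 6  # the "line": likelihood (1-5) x impact (1-4) at or below this

def acceptable(likelihood: int, impact_to_us: int, impact_to_others: int) -> bool:
    """Evaluate against the worst impact, whoever bears it."""
    worst = max(impact_to_us, impact_to_others)
    return likelihood * worst <= ACCEPTABLE_SCORE

# A risk that looks tolerable to us (impact 2) but not to others (impact 3).
print(acceptable(likelihood=3, impact_to_us=2, impact_to_others=3))  # False
```

Taking the worst case across parties is what keeps a risk that is cheap for us but harmful to others from slipping under the line.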

A word of warning as we talk about quantitative risk analysis: never just rely on dollar values, whether you’re talking about harm to others or not. And the reason is that social science shows this, and there are really good studies about it: when you talk about risk purely in terms of the dollar cost weighed against someone else’s physical body, their health, their well-being, their privacy, their security, juries and judges do not react well to that.

It makes sense. It makes the job easier for the actuary.

It makes it easier to process decision making in an insurance practice. But if you don’t have qualitative statements about the levels of harm that are intolerable to people outside the company, you will make regulators, litigators, and juries angry. And when they get angry, what happens to your liability? Does it go down or does it go up? It goes up. Right?

So, let’s take a look at something that we see when we’re in the field. These are two different ways to go to a board of directors and ask for cybersecurity budget.

And you tell me which one is the smoking gun.

Is it the top one, where the organization says: hey, look, we’ve got two kinds of risk, loss of profit from breaches and loss of profit from liabilities. The combined loss for this cybersecurity concern of ours is one point one million dollars. Can we have one point two five million, one and a quarter million dollars, to take care of that one point one million?

Now executives are going to say no; it doesn’t make sense. Or maybe they will say yes, because, you know, we want to make sure we’re doing the right thing.

But this one’s just looking at, you know, harm to us. Now the second one says there’s a risk to us and there’s a risk to consumers.

Quantitatively, that’s a five-hundred-thousand-dollar potential loss to us, but fifty percent of our customers risk identity theft through this same breach.

Can we spend one and a quarter million dollars to reduce this risk? Now, even before we hear the executives’ answer, you should recognize by now which of these is the smoking gun. Right? It’s the one that does not check in with consumers to be sure that we’re doing right by them.
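The arithmetic behind the two asks can be laid out in a few lines. The dollar figures come from the talk’s example; the variable names and the framing are illustrative assumptions.

```python
# First board ask: the safeguard is weighed only against loss to ourselves.
combined_loss_to_us = 1_100_000   # $1.1M: breach loss + liability loss, to us alone
safeguard_cost = 1_250_000        # $1.25M ask

# On a pure self-interest comparison, the ask exceeds the risk it reduces.
first_ask_justified = safeguard_cost <= combined_loss_to_us
print(first_ask_justified)  # False

# Second board ask: the same safeguard, but the risk now names harm to others.
loss_to_us = 500_000                        # $500k potential loss to us
customers_at_risk_of_identity_theft = 0.50  # half of customers, stated distinctly

# The point is not to dollarize the customers' harm, but to show the decision
# accounted for it: the burden is weighed against our loss AND a plainly
# stated harm to others, which is what makes the spend defensible.
```

The first ask fails its own math and only protects the company; the second is the one that survives the question, please produce your risk assessment.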

Now, there have been many occasions at HALOCK, whether we’re acting as expert witnesses or we’re working with our clients, when we tell our clients: you know, in your board deck you said you’re only investing to protect your profits. And they say, no, we didn’t. I say, yeah, you did. Look at it again.

And they come up with the same expression, we said what?

So they’re answering one question when they’re just looking at profits. Right? They’re looking at a pure numerical question about how things are going for their profitability.

And that answers the question really well, but they haven’t thought about adding other people into that, or how difficult things will get when they don’t.

So, if you’re an underwriter, an insurance carrier underwriter, what do you do?

Well, you want to be able to reduce the risk of your portfolio.

Right? And you want to also be able to tell policyholders what they should do to reduce their risk. And if you know that more than half of your claims costs are going to come out of liabilities, how do you introduce to them the concept of adding some kind of duty of care risk analysis to their methods? Well, all you really need to do is tell them: first of all, do a risk assessment properly.

But when you do that risk assessment, make sure you look at the impact to you and impact to others, and you evaluate risk based on both. And that you’re reducing risk to the worst case, whether the worst case represents something that could happen to the public or to you.

But if the cost and the burden of your safeguards are being compared to the risk to others, that’s what’s really going to help the risk of your portfolio and the risk to your policyholders.

Here’s a quick question you can ask when you’re in the process of underwriting.

Do your cyber risk decisions distinctly address reasonable risks to you and to others?

And you can look at a risk register to make the determination.

If the answer is yes, they pose a very low risk to your portfolio, because more than half of the expected costs that come from a breach will be very low, if they’re there at all.

And if they answer no, you know how to fill in the whole liability enchilada: the whole claims costs and the liability that comes after.

If you’re a risk manager, what do you do? Well, again, do your risk assessment the way the regulations and standards tell you to do.

But if you’re doing something like ISO 27005, NIST 800-30, or any of the others listed here, use those. There’s really good analysis you can get from them, and you shouldn’t stop doing it.

But make sure you’re also adding these other concepts of duty of care to others, the idea of harm to others. Look at that distinctly; it’s its own value. Because if it’s higher than anything else, you need to make sure that it’s low. Right? And also make sure you figure out what acceptable is by having a clear definition.

The DoCRA standard gives some advice on this, and also on the reasonableness of a safeguard, by comparing it to the actual risk.

If you download the DoCRA standard at docra.org, this will give you the principles and practices that you can apply to any method. If you download CIS RAM at https://learn.cisecurity.org/cis-ram, you can get a really good step-by-step process for designing, implementing, and operating a risk assessment. And there’s really good data in some work tools that help you make your estimations pretty efficient.

So, I promised you that inside these twenty minutes I would show you a definition that is extremely concise and that will convince any regulator or judge or litigator that you’re right about reasonable. And it’s this phrase: use safeguards that are no more burdensome to you than the risk is to others.

If you have any questions or requests, you can feel free to reach me at either of these email addresses. I do look forward to hearing from you.