Information security laws and regulations tell us to conduct risk assessments before we develop our security and compliance programs. They insist on this so that our security goals are meaningful to each of us, rather than aspirations toward a generic list of controls written by experts who never met us and don’t understand our businesses. But at HALOCK we are seeing risk assessments that may actually be increasing the risks of the organizations that perform them. One very common mistake is defining risk impacts by purely selfish criteria, such as bottom-line profits or executive compensation.
The information risk assessment methods most commonly described by laws, regulations and standards are consistent with ISO 27005 and NIST SP 800-30. What both of these methods have in common is that risk is calculated by multiplying an ‘impact’ score by a ‘likelihood’ score. The logic of the process is to prompt us to pose risk statements like this: “I expect that threat ‘x’ could compromise the security of asset ‘y’, creating an impact of ‘a’ as frequently as ‘b’.”
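To make the arithmetic concrete, here is a minimal sketch of that qualitative calculation. The 1–5 ordinal scales and the scenario values are hypothetical illustrations, not something prescribed by ISO 27005 or NIST SP 800-30, which leave scale design to the organization.

```python
# Hypothetical qualitative risk scoring in the style of ISO 27005 /
# NIST SP 800-30: risk = impact x likelihood. The 1-5 scales below
# are illustrative assumptions, not mandated by either standard.

def risk_score(impact: int, likelihood: int) -> int:
    """Multiply ordinal impact and likelihood ratings (each 1-5)."""
    for name, value in (("impact", impact), ("likelihood", likelihood)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be between 1 and 5, got {value}")
    return impact * likelihood

# "Threat 'x' could compromise asset 'y', creating an impact of 4
# as frequently as 'likely' (rated 3)."
score = risk_score(impact=4, likelihood=3)
print(score)  # 12, out of a maximum of 25
```

The output is only as meaningful as the scales behind it, which is exactly why the standards’ requirement to define ‘impact’ in mission terms matters so much.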
But there is additional business value in ISO 27005 and NIST SP 800-30, because both also require that ‘impacts’ be defined in terms of the organization’s mission or business. This alignment of risk with an organization’s mission makes the above risk statement more meaningful. The risk statement would be modified to say, “… creating an impact to our mission of ‘a’ as frequently as ‘b.’” Now when an organization plans its investments in security safeguards, it knows whether those controls are reasonable and appropriate in light of that mission.
But a selfishly defined mission could increase your liabilities. Let’s compare two risk statements that define impacts differently.
Scenario A: A breach of our client database is likely to happen once per two years and could lead to a loss of 5% of our clients, which would decrease our revenues.
Scenario B: A breach of our client database is likely to happen once per two years and could lead to an exposure of our clients’ personal information. They could suffer costly identity theft as a consequence and they may move their business to a competitor.
Now imagine that a breach occurs at your organization after you had defined your impacts as in Scenario A. A state’s attorney general would investigate to see whether you applied due care over the breached information. If your one impact criterion was ensuring that your revenues didn’t go down, then presumably your investments in security safeguards to reduce that risk were also based on your potential lost revenue. That just looks bad. It’s tantamount to your organization saying, “We’ll protect our customers’ interests up to the point that it hurts our bottom line.” Ask your attorney how comfortable they would be defending that position. I imagine not very comfortable at all.
Scenario B represents an organization that perceives its customers’ privacy and dignity as something they would invest in to protect. Of course, losing those clients is also a consideration. And again, your investments in security safeguards to reduce those risks would be based on preventing those impacts. Now ask your attorney if they would prefer to defend your standard of care based on your customers’ privacy and your business relationship, or on your revenue alone, and – I’ll bet I’m right – they’ll tell you Scenario B gives them a much better position from which to defend you.
Information security risk assessments are a fairly straightforward process, but they should be designed and executed in a way that shows the organization understands its ethical responsibility to protect the interests of its constituency. Asking yourself whether your impact definitions are based purely on selfish goals is a critical step toward taking this ethical stance.