Darrell Issa’s House Committee on Oversight and Government Reform has been busy looking into the security of the healthcare.gov website and its connected systems.
If you watched the hearing on C-SPAN on January 16th, you would have seen the House committee questioning the CISOs of the Department of Health and Human Services and the Centers for Medicare and Medicaid Services about healthcare.gov security.
What transpired in that hearing was painful to watch. The CISOs were unable to satisfactorily answer the (relatively) competent questions coming from the committee members about how secure the healthcare.gov system was.
The awkward, probing, and frustrated back-and-forth was essentially the product of a clash of cultures. The committee members and their witnesses had very different mental models of what “information security” and “compliance” mean, which frustrated the committee members to no end. Given that the committee is composed of non-technical laypeople who need to exercise oversight, their questions were good ones: “Is healthcare.gov protected from hacking?” But they put those questions to CISOs who were prepared to answer a different question, replying, “We validated that the systems met, and sometimes exceeded, standards of security.”
If you have not seen this hearing, take the time to watch it. It’s an extraordinarily well documented example of the failure of security people to communicate effectively with business executives.
While the CISOs may have been technically correct in their answers – that the systems met or exceeded standards outlined in NIST 800-53 – they did not recognize that the nature of the question was mismatched with the nature of their answer. In fact, the CISOs’ answers sounded like obfuscation, even if they were meant to be transparent.
Government CISOs experience security as a set of controls, processes, standards and certifications. Their goal for a system is to get it to an Authorization to Operate (ATO) status so it will be allowed to run. Their way of achieving the goal is to ensure that a system is prepared and validated as having met certain standards of security.
Many of us know, however, that there is a difference between security and compliance. The CISOs were able to describe compliance fluently. The committee members needed to know about security.
During those hearings, and in the week that followed, Issa became aware of a known critical weakness in one of those NIST standards: the risk assessment standard, NIST Special Publication 800-30. While this is a very useful standard for helping organizations think through relative risk by considering “impacts” and “likelihoods,” it has a few flaws that can be well addressed with some sophisticated governance.
After calculating information risks, the standard directs, you consider the security controls you should implement to address those risks, and you have an officer (a designated approving authority) sign off on the residual risk that would remain with the proposed controls.
We are learning that Issa is frustrated to hear that a well-placed manager can accept an information security risk on their own judgment and move forward, thus reaching a state of “compliance” but not necessarily security.
Darrell Issa is a dogged overseer. Say what you want about his choice of things to oversee, but he’s onto something critical right now. If he were to next meet with Ron Ross, the NIST Fellow who heads up the authorship of the Special Publications on risk management, he may get to the bottom of the problem that has plagued us for some time: How do you make sure that compliance means security?
HALOCK solved this problem years ago. We use NIST security standards for our clients regularly (though not exclusively; we provide PCI and ISO 27000 work as well). But we need to help our clients demonstrate both compliance and security. We resolve this one weakness by ensuring that risk is calculated from impacts and likelihoods that align with specific business criteria.
In other words, if risk = impact x likelihood, make sure that the impact score definitions are based on the business mission: to secure information, to secure systems, to secure facilities, to operate as a going concern, to fulfill the organization’s reason for being. Maybe an impact score of ‘1’ means “no impact to business growth” and “no impact to customer confidentiality.” Maybe ‘3’ means “significant impact against the growth plan” and “a potential exposure of customer email addresses.” And maybe ‘5’ would mean “we would go out of business” and “massive loss of full PII data sets.”
Likelihood scores can similarly be defined in terms of the business. Business plan thresholds and time-based security calculations can provide an organization with its minimum and maximum likelihood values, for instance. Or it could use probability models, such as Monte Carlo modeling. Either way, give the risk assessment participants standardized, business-related guidance for selecting their likelihood and impact scores.
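To make the idea concrete, here is a minimal sketch of what standardized, business-anchored scoring guidance could look like. The impact wording echoes the examples above; the likelihood wording and the scale structure are my own illustrative assumptions, not HALOCK’s method or anything prescribed by NIST 800-30.

```python
# Illustrative scoring guidance for a risk assessment. Impact labels echo
# the article's examples; likelihood labels are hypothetical placeholders.
IMPACT_SCALE = {
    1: "No impact to business growth; no impact to customer confidentiality",
    3: "Significant impact against the growth plan; "
       "potential exposure of customer email addresses",
    5: "We would go out of business; massive loss of full PII data sets",
}

LIKELIHOOD_SCALE = {
    1: "Not expected within the business-plan horizon",
    3: "Plausible within the business-plan horizon",
    5: "Expected to occur, possibly more than once",
}

def risk_score(impact: int, likelihood: int) -> int:
    """Risk = impact x likelihood, using only the published scales."""
    if impact not in IMPACT_SCALE or likelihood not in LIKELIHOOD_SCALE:
        raise ValueError("scores must come from the standardized scales")
    return impact * likelihood
```

The point of rejecting off-scale scores is the point of the article: participants cannot improvise their own meaning of a ‘3’; every score traces back to a business-defined criterion.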
Then, establish risk acceptance criteria with the executive team, asking, “What frequency, of what impact, would we invest against? What frequency, of what impact, would we not invest against?”
The answer to these questions is your risk acceptance level, decided by executives and based on the business mission and obligations to secure assets. This acceptance level, consistently applied, allows your organization to determine whether an identified risk is acceptable or not.
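A consistently applied acceptance level can be sketched as a single threshold check. The threshold value below is a hypothetical example of what an executive team might decide, not a recommended number.

```python
# Hypothetical executive decision: invest against any risk scoring above 9
# on a 1-25 (impact x likelihood) scale; accept anything at or below it.
ACCEPTANCE_THRESHOLD = 9  # assumed value for illustration

def is_acceptable(impact: int, likelihood: int) -> bool:
    """Apply the executive-approved acceptance level consistently."""
    return impact * likelihood <= ACCEPTANCE_THRESHOLD
```

Because the threshold is set once by executives and applied uniformly, no individual manager’s private judgment decides which risks the organization lives with.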
The way this serves security is that it elevates executives’ understanding of security’s impact on the business, and it drives investment in security controls that meet the organization’s actual objectives.
This is not to say that well-designed risk assessments are all you need to secure your systems … not by a long shot. But if organizations ensure that their risk acceptance criteria are standardized across all of their risks, and grounded in their mission and their responsibilities, then we increase the likelihood that accepted risks will be aligned with security, and not with an arbitrary, indefensible judgment.
If Darrell Issa and Ron Ross sit to discuss this flaw in Ross’ risk assessment process (“sit to discuss” may be more “grill, grill, grill, sweat, sweat, sweat”), then I hope they look for incremental improvement of this useful risk assessment standard. Requiring that organizations accept only those risks that are consistent with their mission and responsibilities could be a simple fix with a significant payoff.