By Chris Cronin, ISO 27001 Auditor, Partner
Would you be surprised to learn that there is no HIPAA requirement that tells organizations to use a firewall? How about an intrusion detection system (IDS)? Nope. And no requirements for a data loss prevention tool (DLP) either, or a proxy server, or even a security information and event management system (SIEM).
All too often, clients will request a “HIPAA review of my firewall.” We see devices being marketed with “HIPAA policies,” “GLBA configurations,” or “PCI DSS rules” built in. But to be frank, we are dubious about the validity of these requests and claims. If you read the HIPAA Security Rule, the reason for our doubts becomes clear: the regulation provides very little specificity about which safeguards to implement or how those safeguards should work. HIPAA only requires that controls be reasonable and appropriate, and that you determine what “reasonable and appropriate” means through a risk assessment.
“HIPAA only requires that controls be reasonable and appropriate, and that you determine what ‘reasonable and appropriate’ means through a risk assessment.”
A lot of network and system security devices are purchased in response to regulatory requirements such as HIPAA. Implementing controls to secure your organization is often a good thing to do. But marketing claims by manufacturers that their security tools provide compliance with HIPAA, PCI DSS, GLBA, FISMA, or other regulations and standards can be misleading. In fact, it is possible to configure these tools in a way that creates unnecessary risk and liability for the organizations that use them.
To illustrate the point, let’s first think through the basic logic of security regulations:
1. First, consider the harm that may come to others if information is not protected from foreseeable threats.
2. Then, evaluate the harm in a way that can be compared to your burden for protecting the information.
3. Finally, use safeguards that protect people from harm, but that are not overly burdensome to you.
That’s a simple start.
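The balance in those three steps can be made concrete with a little arithmetic. Here is a minimal sketch of that logic, with entirely hypothetical dollar figures and likelihoods; a real risk assessment would use criteria your organization defines for itself.

```python
# Hypothetical illustration of the balance test described above: a
# safeguard is "reasonable and appropriate" when its burden does not
# exceed the foreseeable harm it prevents. All figures are made up.

def expected_harm(likelihood_per_year: float, impact_dollars: float) -> float:
    """Annualized harm to others if the information is not protected."""
    return likelihood_per_year * impact_dollars

def safeguard_is_reasonable(burden_dollars: float,
                            likelihood_per_year: float,
                            impact_dollars: float) -> bool:
    """The safeguard passes the balance test when its burden is no
    greater than the harm it is expected to prevent."""
    return burden_dollars <= expected_harm(likelihood_per_year, impact_dollars)

# Example: a breach estimated at 30% likelihood per year with a $2M
# impact, weighed against a safeguard costing $150K per year.
print(expected_harm(0.3, 2_000_000))                      # 600000.0
print(safeguard_is_reasonable(150_000, 0.3, 2_000_000))   # True
```

The point is not the arithmetic itself but the shape of the question: harm to others on one side, burden to you on the other.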
Now let’s apply that regulatory logic to an example security technology, say, an IDS/IPS at a company called ACME. (An “IPS” is simply an IDS that is configured to block traffic.)
1. First, consider the harm that may come to others if information is not protected from a hacker who uses known techniques for grabbing data from ACME’s servers.
2. Then, estimate the likelihood and impact of that scenario at ACME. Suppose the estimate is that all of ACME’s patient or customer data could foreseeably be exposed within the year.
3. Finally, ACME can use an IDS/IPS to detect and prevent those known hacker techniques, since doing so would not be as costly or burdensome as the expected breach.
That may have been a predictable conclusion. But configuring the IDS/IPS and managing it over time is where the risk and liability start to swing wildly. Let’s take a look at why that is.
“Configuring the IDS/IPS and managing it over time is where the risk and liability start to swing wildly.”
Intrusion detection systems and intrusion prevention systems are essentially software that monitors a network. A host-based IDS runs on a single computer to detect and block attacks against that computer, while a network-based IDS/IPS sits on a device that watches all traffic on a network to detect and block attacks anywhere on that network. An intrusion detection system becomes an intrusion prevention system when it blocks the traffic that raises an alarm.
At its most basic, an IDS/IPS has four significant parts:
1. Sensors that watch network traffic,
2. A database of patterns that describe what suspicious network activity looks like,
3. An engine that looks for matches between what the sensors see and the patterns in its database,
4. And a response capability to block traffic, alert on a match, log the match, or redirect suspicious network traffic.
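The four parts above can be sketched in a few lines of code. This is a toy illustration only; real systems such as Snort or Suricata are far more sophisticated, and the rule patterns and actions below are hypothetical.

```python
# Toy sketch of the four IDS/IPS parts: sensors feed traffic to an
# engine that matches it against a pattern database and takes an action.
import re

# Part 2: a database of patterns describing suspicious traffic, each
# paired with part 4: an action ("alert" or "block"). Patterns are
# hypothetical examples of well-known attack signatures.
RULES = [
    (re.compile(rb"(?i)union\s+select"), "block"),   # SQL injection attempt
    (re.compile(rb"\.\./\.\./"), "alert"),           # path traversal probe
]

def inspect(packet_payload: bytes) -> str:
    """Part 3: the engine. Match what a sensor saw against the rule
    database and return the configured action ("pass" if nothing matches)."""
    for pattern, action in RULES:
        if pattern.search(packet_payload):
            return action
    return "pass"

# Part 1: a sensor would capture payloads off the wire and feed them to
# inspect(); here we simulate it with two example payloads.
print(inspect(b"GET /page?id=1 UNION SELECT password FROM users"))  # block
print(inspect(b"GET /index.html"))                                  # pass
```

Notice that the liability-relevant decisions live entirely in the rule table: change an action from “block” to “alert” and the same attack now passes through.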
There are many security devices that use similar principles. DLP systems look for text that matches patterns of sensitive data to prevent it from moving where it shouldn’t go. Advanced malware protection systems have much more complex engines that detonate suspected malware in an isolated environment to observe its behavior and determine how to respond. Proxy servers and firewalls monitor network usage, but they use similar principles for detecting, testing, and taking action on suspicious behavior.
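For instance, a DLP check for outbound data can be sketched the same way as the IDS engine. The pattern below (a U.S. Social Security number format) is a simplified, hypothetical example of what such a rule might look like.

```python
# Hypothetical DLP-style check: scan outbound text for strings that
# look like U.S. Social Security numbers before letting it leave.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def allow_outbound(message: str) -> bool:
    """Permit the message only if no SSN-like pattern appears in it."""
    return SSN_PATTERN.search(message) is None

print(allow_outbound("Quarterly report attached"))    # True
print(allow_outbound("Patient SSN is 123-45-6789"))   # False
```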
“A technician who modifies the rules for the kinds of traffic that get ignored, alerted on, stopped, or permitted to move freely has a lot of power – at the click of a mouse – to increase or decrease their employer’s liability.”
Most of the patterns in the database and the optional actions come standard with security devices. But an owner can also modify these settings to address specific security concerns in their environment, or to accommodate business needs. These decisions to modify the rules and capabilities of security devices are in the hands of technicians. A technician who modifies the rules for the kinds of traffic that get ignored, alerted on, stopped, or permitted to move freely has a lot of power – at the click of a mouse – to increase or decrease their employer’s liability. So administrators should act in concert with management when they administer these security devices.
Collaboration between technicians and management is a classic problem in business. But without it, technicians end up making decisions about liability on their own. Similarly, management can declare the level of risk they will tolerate without being able to adequately translate it into practical configuration terms.
From practical experience, most readers will know that security devices, by default, are not perfectly tuned to business needs. And many of us have been in situations where valid business practices have been blocked or slowed by security controls. For example, many encryption methods, multi-factor authentication schemes, remote access gateways, and password-protected mobile devices slow down business. As intrusive as some of these security controls have been, they are not nearly as intrusive as they could be. Most security teams provide less security than they might because the business will not tolerate more interference than it believes is necessary.
“Neither judges nor regulators determine liability or compliance based on the dollar value of previous breaches.”
Sometimes technicians find themselves overcompensating for business requirements. One of the most common fears a technician has is the cursed “false positive.” This is a situation where a security system believes it sees something nefarious and treats it as a threat, but if the security system is wrong, it could be stopping valid business functions. So technicians, their managers, and even business managers choose to “err on the side of business” and permit some risky activities in order to reduce disruption to the business.
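This trade-off can be made concrete with a toy example. The events, suspicion scores, and thresholds below are all hypothetical; the point is that where the blocking threshold sits determines both how many attacks are stopped and how much legitimate business is disrupted.

```python
# Toy illustration of the false-positive trade-off. Each event has a
# hypothetical suspicion score (0-100) and a ground-truth label.
EVENTS = [
    ("nightly-backup", 72, False),   # (name, suspicion score, truly malicious?)
    ("vendor-vpn",     65, False),
    ("sql-injection",  88, True),
    ("port-scan",      80, True),
]

def outcomes(threshold: int):
    """Count attacks blocked vs. legitimate traffic wrongly blocked
    when everything scoring at or above the threshold is stopped."""
    blocked_attacks = sum(1 for _, s, bad in EVENTS if s >= threshold and bad)
    false_positives = sum(1 for _, s, bad in EVENTS if s >= threshold and not bad)
    return blocked_attacks, false_positives

print(outcomes(60))  # (2, 2): both attacks blocked, but backups and VPN too
print(outcomes(85))  # (1, 0): no business disruption, but the scan slips through
```

Either threshold is defensible; the question is who decided, and whether the decision was documented.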
Regulations, duty-of-care balance tests performed by judges, and information security standards all allow for risk analysis to strike the right balance between security and business, but there had better be a documented, consistent, rational way to make those decisions. And business management had better be aware of those decisions.
If a technician is making changes to something as sensitive and powerful as an IDS, DLP, firewall, or proxy server, their decisions can sway risk and liability one way or the other. They should involve executive management in their change management process to think through the risk and make documented decisions: accept the risk of a change, reject the change if the risk seems intolerable or impossible to mitigate, or find a way to mitigate the risk of the change.
This advice may seem at first to be impractical. How could a non-technical business manager have the information they need to make such a decision? Well-crafted risk assessment criteria provide organizations with the right way to communicate about risk. If impact scores are clearly defined in terms of potential harm that can come to the organization, the public, or other interested parties, then technicians and non-technicians alike can estimate the potential damage that a breach, or an overly aggressive control can create.
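Well-defined criteria of that kind might look like the sketch below. The scale definitions and the sign-off threshold are hypothetical; an organization would write its own in terms of harm to itself, the public, and other interested parties.

```python
# Sketch of shared risk assessment criteria. Because each score is
# defined in plain language, technicians and non-technical managers
# can estimate and discuss the same numbers. All definitions are
# illustrative, not prescriptive.
IMPACT_SCALE = {
    1: "Negligible: no harm to patients, customers, or the organization",
    2: "Low: limited internal disruption, no external harm",
    3: "Moderate: some individuals harmed or notified; recoverable",
    4: "High: widespread harm to the public; regulatory action likely",
}
LIKELIHOOD_SCALE = {
    1: "Not foreseeable within several years",
    2: "Foreseeable within a few years",
    3: "Foreseeable within the year",
    4: "Expected to occur repeatedly this year",
}

def risk_score(likelihood: int, impact: int) -> int:
    """A single number both technicians and managers can reason about."""
    return likelihood * impact

# A proposed firewall rule change estimated at likelihood 3, impact 4:
score = risk_score(3, 4)
print(score, "- acceptable" if score <= 6 else "- requires management sign-off")
```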
Just a quick note on the role risk analysis plays in managing liability and compliance. There is a lot of talk among cybersecurity professionals about using dollar-per-record calculators to think through the potential cost of a breach. These are useful numbers to consider, but neither judges nor regulators determine liability or compliance based on the dollar value of previous breaches. Regulators and judges talk about negligence and duty of care in terms of balance: Did a breached organization think through the harm that could come to others? Was the burden of their controls no greater than the harm that could come to others? Your risk assessments must ask questions of harm to others and burden to yourself. Otherwise, there is no way to manage risk well or to justify your decisions to authorities.
So the next time you need to know whether your security devices are “HIPAA compliant,” or conform to some other regulation or standard, remember that there is no single way to verify that or check a box. Regulations are not that specific, and for good reason: regulators are required to avoid specifying controls that may be too burdensome for some. So regulations instead require that you show you thought through foreseeable risks to others and to yourself as you configured those security devices, systems, and applications.
HALOCK’s governance team is expert at establishing risk assessment criteria, and coaching organizations through managing security devices and processes using risk-based analysis. You can find out more about our process by viewing HALOCK’s “Best Guide to HIPAA Ever” and our presentation on the “Kaizen of Risk Management.” Still have questions? Contact us – we’re here to help!