Cybersecurity audits mean nothing to hackers. And in fact, neither do short-sighted privacy regulations. Hackers have been showing us this for years. And not just because they find ways to exploit systems before you have a chance to lock them down. It’s more than that. Hackers find value in your systems and data that you don’t think are interesting enough to protect.
On November 10, 2015, prosecutors announced indictments against the hackers who perpetrated the previous year’s JPMorgan Chase & Co. breach, which siphoned millions of names and addresses from JPMorgan databases. For those of us who followed the news of the breach, it was apparent that what the hackers grabbed – a huge dataset of names, addresses, and email addresses – was not what we normally consider to be protected by statute or regulation. This attack looked at first glance like a “we’ll take what we can get” attack, but experienced risk managers and incident response experts knew better: hackers have been intentionally going after the systems and data that you don’t protect because you don’t know how valuable they actually are.
In the case of the JPMorgan Chase breach, the hackers needed a large list of investors who might get suckered into a pump-and-dump scheme, which was their actual end game. The real money for the hackers was to be made by provoking a mass quantity of purchases of valueless stock, so they needed a mass quantity of investors’ email addresses. And why would JPMorgan Chase not safeguard that contact information, given its quarter-billion-dollar security budget? Well, that information is not legally classified in the U.S. as “Personally Identifiable Information,” nor by itself as “personal financial information.” There were no account numbers, Social Security numbers, or any of the identifiers we normally associate with regulatory protections in that dataset. It’s easy to see how that information could be classified as “less risky,” thus requiring less security attention.
In the U.S., auditors, regulators, even attorneys are generally supportive of contact information being protected with less care, because, after all, isn’t this the kind of thing you find in a phone book anyway? Sure. Perhaps. Maybe. But the kicker isn’t whether each name and address is PII as defined; it’s that the full set of records at JPMorgan Chase & Co. is hugely valuable because it means something to someone who wants to use it for a bad purpose. For the most part, however, cybersecurity agendas, budgets, and priorities go to what auditors and regulators tell businesses to focus on. If the regulators and auditors are not ahead of the hackers’ game (and they’re not), then we are following the wrong leader as we plan our security strategies.
This is why we must assess risk, rather than relying on audits or “gap assessments.” We have to constantly ask, “What bad thing can come of this information if someone goes after it?” Good risk assessments force us to think like bad people, or bad weather, or bad machines, or bad geological events.
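To make the contrast concrete, here is a minimal sketch of scenario-based risk scoring versus regulatory labeling. Every name, weight, and scenario below is an illustrative assumption, not a real scoring methodology: the point is simply that an asset’s risk comes from what an attacker can do with it, not from whether each record carries a statutory label.

```python
# Hypothetical sketch: score a data asset by threat scenarios rather than
# by regulatory labels. All names, weights, and scenarios are illustrative.
from dataclasses import dataclass, field


@dataclass
class DataAsset:
    name: str
    record_count: int
    regulated_pii: bool  # does a statute or regulation label it sensitive?
    # Each scenario: (description, impact 1-5, likelihood 1-5)
    threat_scenarios: list = field(default_factory=list)


def risk_score(asset: DataAsset) -> int:
    """Worst-case scenario score: impact x likelihood across all scenarios."""
    return max(
        (impact * likelihood for _, impact, likelihood in asset.threat_scenarios),
        default=0,
    )


# A large contact list: not statutory PII, but attractive at scale.
contact_list = DataAsset(
    name="investor contact list",
    record_count=80_000_000,
    regulated_pii=False,  # names and email addresses alone aren't regulated PII
    threat_scenarios=[
        ("pump-and-dump spam targeting investors", 4, 4),
        ("spear-phishing against account holders", 5, 3),
    ],
)

# A checklist keyed to regulatory categories would deprioritize this asset
# (regulated_pii is False); scenario-based scoring flags it as high risk.
print(risk_score(contact_list))  # 16: the pump-and-dump scenario dominates
```

The design choice worth noticing is that `regulated_pii` never appears in `risk_score` at all: the compliance label is recorded but deliberately ignored, which is exactly the shift from audit-driven to risk-driven thinking the paragraph above describes.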
Over the past several years, we’ve seen risk assessments go well outside the standard, straight-and-narrow questions about whether a control is in place, or how an auditor feels about a certain safeguard. Risk assessments have been getting organizations to think about vendor lists with contact names, transactional data, and contract terms as types of information that can be used against companies if they fall into the wrong hands. Moreover, business managers, especially those with limited budgets (read: everyone), love it when risk assessments give them a sound argument against auditors by showing with evidence, “we don’t need to make that security investment, because there is no foreseeable risk that it would protect against,” or “that safeguard creates too great a burden, so let’s find an alternative.”
Think of the retailers whose credit card breaches have famously happened under the certification of their PCI DSS compliance. Even well-qualified and conscientious QSAs are limited in their ability to help secure an environment if they are focused on satisfying a list of required controls rather than thinking through the actual risks (PCI DSS requires companies to do this, but risk assessment remains an unfamiliar skill).
The JPMorgan Chase breach, and the facts revealed in the indictments, showed us that if we adhere exclusively to regulatory categories for what should be protected, or if we take our direction from auditors’ checklists to understand our potential exposure, we will continue to be playthings in the hands of people who will assess our risk for us, but for their benefit, not ours. It’s time we started focusing on the risks of cybersecurity, rather than the audits.