The insurance industry is one of the leading sectors in artificial intelligence (AI) adoption. Insurers apply AI across a range of scenarios: automating underwriting decisions, streamlining claims processing, detecting and scoring fraud, analyzing risk, personalizing pricing, and handling customer interactions.

At the same time, insurers are in possession of some of the most complete personal data sets in existence. Identity data, health records, financial histories, behavioral analytics, claims notes, and geolocation are just a few examples of the types of data an insurer could be handling for a given customer. As models take in more of this information, the impact of a breach or data misuse grows.

Attackers know this, which is why they increasingly target insurers with ransomware, credential-theft campaigns, and supply-chain attacks aimed at third-party service providers.

Regulatory and Privacy Considerations

Insurers are subject to a complex array of state and federal regulations and guidelines, such as:

  • NAIC Insurance Data Security Model Law (adopted by many states)
  • State privacy laws, e.g., CCPA/CPRA in California
  • HIPAA, where insurers act as covered entities (CEs) or business associates (BAs)
  • GLBA for certain lines of insurance
  • State-mandated cybersecurity standards for carriers and brokers

AI brings additional obligations by expanding use cases for data and greatly increasing the volume of personal information flowing through vendors, models, and cloud-based systems. Insurers must ensure any AI model that touches consumer data meets privacy and security standards, including encryption requirements, data minimization protocols, proper disclosures to customers, and strong vendor oversight.
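As a concrete illustration of data minimization, a model pipeline can enforce an explicit allow-list of fields before any record reaches an AI system, so sensitive identifiers never leave the source environment. This is a minimal sketch only; the field names and the allow-list are hypothetical, not drawn from any specific regulation.

```python
# Sketch of field-level data minimization before an AI model call.
# ALLOWED_FIELDS and the record structure are illustrative assumptions.

ALLOWED_FIELDS = {"claim_id", "loss_type", "loss_amount", "policy_state"}

def minimize(record: dict) -> dict:
    """Drop any field not explicitly approved for model input."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

claim = {
    "claim_id": "C-1001",
    "ssn": "123-45-6789",        # sensitive: must never reach the model
    "loss_type": "auto",
    "loss_amount": 4200.00,
    "policy_state": "IL",
}

model_input = minimize(claim)
assert "ssn" not in model_input
```

An allow-list (rather than a deny-list) is the safer default: new fields added upstream are excluded until someone deliberately approves them.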

Recent breaches have demonstrated why insurers are attractive targets: sensitive data is aggregated across millions of customers, so a single compromise can have broad impact.


How Insurers Must Adapt

Insurance carriers and brokers need improved controls around AI governance, model management, third-party risk management, and data protection. These requirements include, but are not limited to:

  • Assessing privacy risks, data retention issues, and discriminatory outcome potential in AI models
  • Mapping data flows, especially across underwriting, claims, data analytics, and vendor systems
  • Requiring BAAs or equivalent vendor agreements if PHI or sensitive personal data is processed
  • Conducting annual risk assessments, which should include AI technologies
  • Hardening identity controls and fraud-detection workflows (both common targets)
  • Testing API security and claims-automation systems for abuse
  • Maintaining compliance with the NAIC Model Law and applicable state privacy mandates
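Two of the items above, mapping data flows and requiring BAAs, can be combined into a simple automated check: inventory each system, tag the data categories it receives, and flag any vendor system processing PHI without a signed agreement. The structure and entries below are hypothetical, shown only to sketch the idea.

```python
# Hypothetical data-flow inventory: flag vendor systems that receive PHI
# but lack a signed BAA. System names and categories are illustrative.

flows = [
    {"system": "claims-automation",   "vendor": True, "data": {"PHI"},              "baa": True},
    {"system": "marketing-analytics", "vendor": True, "data": {"behavioral"},       "baa": False},
    {"system": "underwriting-model",  "vendor": True, "data": {"PHI", "financial"}, "baa": False},
]

def missing_baas(flows):
    """Return vendor systems processing PHI without a BAA in place."""
    return [f["system"] for f in flows
            if f["vendor"] and "PHI" in f["data"] and not f["baa"]]

print(missing_baas(flows))  # → ['underwriting-model']
```

Keeping the inventory in machine-readable form lets a check like this run as part of the annual risk assessment rather than as a one-off spreadsheet exercise.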

Organizations that take a proactive approach to AI risk and cybersecurity can bolster customer trust, enhance regulatory defensibility, and prevent costly operational disruptions.

With HALOCK, organizations can establish a legally defensible security and risk program through Duty of Care Risk Analysis (DoCRA). This balanced approach provides a methodology to achieve reasonable security as the regulations require.


Review Your Security and Risk Posture