Artificial intelligence (AI) is transforming how government agencies deliver services, analyze data, automate operations, monitor systems, and detect threats. AI solutions are becoming commonplace at the federal, state, and local levels as agencies work to improve efficiency, reduce costs, and scale citizen services. Intelligent systems bring game-changing opportunities, but they also introduce cybersecurity risk profiles that are distinct in the public sector.

Government agencies control and process some of the most sensitive information in society, from national security data and personally identifiable information (PII) to veterans’ and beneficiaries’ healthcare records, smart infrastructure controls, and public safety systems. AI also increases complexity and expands the attack surface as it reaches into decision support tools, data pipelines, citizen-facing engagement systems, and identity verification.

Cybersecurity teams working in the government sector must also address continuously evolving risk while building stakeholder trust in public services and meeting strict regulatory standards. This article covers how AI is changing cybersecurity risk in government, what U.S. regulations and government agencies impact risk, how government agencies’ incident response (IR) and cyber insurance requirements are changing, and how Duty of Care Risk Analysis (DoCRA) and reasonable security frameworks apply to public organizations.

 

How AI Is Changing Cybersecurity Risk in Government

AI amplifies and accelerates the existing capabilities of both defenders and attackers. On the one hand, agencies are increasingly using AI and machine learning (ML) to enhance anomaly detection, automate threat hunting, and prioritize vulnerabilities based on richer, real-time data signals. These AI-driven tools and processes identify and prioritize new risks faster than traditional rule-based systems.

For example, the U.S. Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency (CISA) are both championing the use of AI-driven cybersecurity tools to improve national resilience.

On the other hand, threat actors also have access to AI that they can use to improve reconnaissance, target selection, and even automated exploitation. Attackers can use machine learning to scan large networks, identify misconfigurations, and prioritize high-value targets for follow-on activity. AI can also automate attacks, including scanning for new vulnerabilities, generating social engineering messages that mimic internal communications, and testing malware payloads against multiple antivirus products.

 

The advent of AI also fundamentally changes the risk landscape in several ways:

AI systems are attack vectors. AI both increases and concentrates risk as AI components become critical digital assets that attackers actively target. Machine learning models can fail, be evaded, or be manipulated through specialized attacks such as model inversion, adversarial input, and data poisoning. Government AI systems face all the traditional software threats, plus these unique ML threats.
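
To illustrate one of these ML-specific threats, here is a hedged sketch (hypothetical data, standard library only) of a first-pass screen for data poisoning: it flags training samples whose feature value is a robust outlier within its label group, using a modified z-score based on the median absolute deviation. Real poisoning defenses are far more involved; this only shows the idea.

```python
from statistics import median

def flag_suspect_samples(samples, threshold=3.5):
    """Flag samples whose feature value is a robust outlier within its
    label group, using a modified z-score (median absolute deviation).
    A crude first-pass screen for possible data-poisoning attempts."""
    by_label = {}
    for label, value in samples:
        by_label.setdefault(label, []).append(value)

    suspects = []
    for label, value in samples:
        group = by_label[label]
        if len(group) < 3:
            continue  # too few points in this class to judge
        med = median(group)
        mad = median(abs(v - med) for v in group)
        if mad > 0 and 0.6745 * abs(value - med) / mad > threshold:
            suspects.append((label, value))
    return suspects

# Hypothetical training data: one "benign" point sits far from its class.
data = [("benign", 1.0), ("benign", 1.1), ("benign", 0.9),
        ("benign", 1.05), ("benign", 9.0), ("malicious", 5.0)]
print(flag_suspect_samples(data))  # flags the out-of-place 9.0 sample
```

The median-based score is used instead of a plain mean/standard-deviation z-score because a poisoned point inflates the standard deviation enough to hide itself in small samples.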

AI extends attack surfaces. Anywhere AI is deployed to enhance or automate decision-making in sensitive systems, it presents a new attack surface, particularly where AI components are integrated with operational technology (OT) used in critical infrastructure control, public safety, and emergency response networks.

AI accelerates reconnaissance and exploitation. Machine learning tools let attackers scan large government attack surfaces more efficiently, identify misconfigurations, and move quickly from discovery to exploitation of high-value targets. These attack techniques and tools will only continue to improve and accelerate.

AI also lowers the barrier to entry for some adversaries by automating social engineering and reconnaissance processes. This means non-state actors with access to ML tools can also more effectively target government systems.

AI blurs privacy and security controls. Privacy-preserving computation techniques, such as homomorphic encryption, federated learning, and differential privacy, can strengthen data controls during AI model training and execution. However, they also present new risks: security teams must detect when these encryption and privacy-preserving controls are misconfigured or bypassed, even as the algorithms they protect make privacy-impacting decisions in real time.
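
As a minimal illustration of one such technique, the sketch below shows the classic Laplace mechanism for differential privacy: a counting query over sensitive records is published with calibrated noise so no single record measurably changes the answer. The records, query, and epsilon value are hypothetical, and this is not a production implementation.

```python
import random

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism: return
    the true count plus Laplace noise with scale = sensitivity/epsilon
    (the sensitivity of a counting query is 1)."""
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Laplace(0, scale) sampled as scale times the difference of two
    # unit-exponential draws.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# Hypothetical beneficiary ages: publish how many are over 65 without
# revealing any individual's exact contribution to the count.
ages = [34, 71, 68, 45, 80, 59, 66]
noisy_count = dp_count(ages, lambda a: a > 65, epsilon=0.5)
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy, which is exactly the kind of trade-off security and privacy teams must now evaluate together.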

AI introduces ambiguity around accountability. Complex ML models can introduce opaque or “black-box” decision-making processes that are harder to explain and audit. Accountability around faulty decisions can also get obscured among many participants in the model development, data collection, and deployment ecosystems.

 

Why Government Agencies Are Valuable Cyber Targets

Public sector organizations hold high-impact assets. National defense, public health, emergency management, election systems, and social safety net programs all depend on secure, resilient IT infrastructure. Successful cyberattacks against government networks and systems can disrupt services, expose the data of millions of citizens, and erode public trust.

Ransomware attacks against municipalities, state agencies, and federal contractors have significantly increased in both frequency and impact in recent years, resulting in extended outages and multi-million-dollar recovery and ransom payments. Advanced persistent threats (APTs) linked to nation-state actors continue to target U.S. government networks to steal intellectual property and sensitive national security information.

Nation-state actors, hacktivists, and criminal syndicates are growing more sophisticated and view the public sector as a high-value target. These threats are exacerbated by risk factors common to government IT environments:

Legacy systems. Agencies often run complex, aging IT systems that can be difficult to patch or replace. These systems can have inherent vulnerabilities that present attractive targets to attackers.

Decentralized procurement. Agencies often have complex and decentralized procurement processes that make it difficult to track all software and services.

Complex integration. Government systems and applications can be tightly integrated with other government agencies and private sector contractors. Supply chain risk extends to all connected vendors.

 

Regulations and Standards Affecting Cyber Risk in Government

Government agencies are subject to rigorous cybersecurity and data privacy regulations and standards. The following U.S. regulations and agency guidance shape risk management for government AI systems:

Federal Information Security Modernization Act (FISMA). FISMA applies to all federal agencies and government contractors and mandates that organizations implement “appropriate administrative, technical, and physical safeguards” for their information systems. FISMA risk assessment and continuous monitoring requirements apply to systems that incorporate AI.

Executive Order on Improving the Nation’s Cybersecurity (EO 14028). Issued in May 2021, this Executive Order drives the adoption of zero trust architecture (ZTA), endpoint detection and response (EDR), and federal breach reporting requirements, pushing government-wide adoption of modern, risk-based security controls and practices. Requirements like zero trust, EDR, and breach reporting have clear relevance to AI-enabled systems.

OMB Circular A-130. Circular A-130 is an OMB policy that establishes baseline requirements for federal information resources management, including information security and privacy policy requirements. This policy applies to government AI systems and components.

NIST Cybersecurity Framework. NIST’s Cybersecurity Framework (CSF) organizes voluntary risk management guidance into five core functions (Identify, Protect, Detect, Respond, Recover). Regulators and auditors frequently use the CSF as a benchmark for assessing reasonable security practices.

NIST Artificial Intelligence Risk Management Framework. NIST’s AI Risk Management Framework (AI RMF) provides an approach for identifying, managing, and mitigating risk to organizations and individuals related to AI.

Federal Risk and Authorization Management Program (FedRAMP). FedRAMP provides security assessment and authorization standards that cloud services used by government agencies and departments must meet to be approved for use.

State and Local Cybersecurity Standards. Many states have enacted laws or executive orders mandating their state agencies meet particular cybersecurity standards for incident reporting, risk management, or third-party assessment.

Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA). CIRCIA, enacted in 2022, will mandate cyber incident reporting and ransomware payment disclosure for covered entities in critical infrastructure sectors. Once its implementing rules take effect, CIRCIA will affect many government agencies and service providers.

 

Evolving Incident Response Expectations for Government

Incident response in a government environment must consider the broader impact of disrupted services and lost data. If critical public services are affected, incident response will likely need to involve coordination with federal partners, law enforcement, and public communications to manage both technical and reputational consequences. Government IR playbooks should include:

  • Roles and escalation paths across agency leadership, security teams, and third parties. Government agencies likely have specific escalation and response protocols for contacts with federal partners, including relevant law enforcement.
  • Playbooks for AI-specific and known threats (ransomware, supply chain compromise, APTs, adversarial ML).
  • Processes to coordinate with federal partners (CISA, FBI, state fusion centers, and more).
  • Documented processes for post-incident analysis that drive improvement and satisfy compliance reporting obligations.
  • Regular incident response exercises that incorporate realistic AI threat scenarios.

 

The Role of Cyber Insurance in Government Risk Management

The government sector is just beginning to feel the influence of cyber insurance on security programs. Carriers increasingly impose specific, maturing expectations for security program maturity on government entities and their vendors.

Requirements you can expect to see from cyber insurance carriers include:

  • Identity and access controls (multi-factor authentication – MFA).
  • Segmentation, network hardening, and least privilege architecture.
  • Continuous monitoring, logging, and detection capabilities.
  • Vulnerability scanning and patching.
  • Incident response plan (IRP) and tabletop exercises.

Government entities and their contractors that can’t demonstrate a risk-based approach may pay higher premiums, face carve-outs for specific classes of claims, or receive limited or no coverage. Carriers increasingly reward well-documented, defensible security decisions.

 

How DoCRA and Reasonable Security Apply to Government Cybersecurity Risk

Government and public sector entities face additional risk considerations: mission continuity and legal compliance obligations, direct accountability to citizens, and limited budget flexibility for meeting citizen expectations. This makes Duty of Care Risk Analysis (DoCRA) even more important for documenting why particular controls are or are not reasonable in light of all factors, including threat likelihood, impact, and potential cost to citizens. Reasonable security provides a flexible approach to establishing and documenting:

  • Whether controls are proportional to the risk.
  • Why particular controls were chosen or not chosen.
  • What the residual risk was and how it was communicated to leadership.
  • Whether a decision is defensible and meets regulatory expectations and fiduciary responsibility.

This approach also supports agile, defensible decision-making in real time, as opposed to simply ticking boxes on static checklists.
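
A hedged sketch of how such an analysis might be recorded follows. The assets, threats, scoring scales, and acceptance threshold below are hypothetical illustrations, not the formal DoCRA methodology.

```python
# Hypothetical DoCRA-style risk register. Scales, assets, and the
# acceptance threshold are illustrative, not the formal DoCRA method.
ACCEPTABLE_RISK = 9  # scores above this threshold need treatment

risks = [
    # (asset, threat, impact 1-5, likelihood 1-5, planned response)
    ("AI analytics system", "data poisoning", 5, 3, "data integrity checks"),
    ("admin portal", "credential stuffing", 2, 4, "enforce MFA"),
    ("public website", "defacement", 2, 2, "accept residual risk"),
]

def evaluate(register, threshold=ACCEPTABLE_RISK):
    """Score each risk as impact x likelihood and record a defensible
    treat/accept decision against the stated acceptance criterion."""
    decisions = []
    for asset, threat, impact, likelihood, response in register:
        score = impact * likelihood
        action = "treat" if score > threshold else "accept"
        decisions.append((asset, threat, score, action, response))
    return decisions

for asset, threat, score, action, response in evaluate(risks):
    print(f"{asset} / {threat}: risk {score} -> {action} ({response})")
```

The value of a register like this is less the arithmetic than the record it leaves: each accept/treat decision is tied to a stated acceptance criterion, which is what makes the outcome defensible to leadership, auditors, and insurers.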

 

What are Some Practical Government Use Cases for DoCRA?

Cybersecurity decision-makers can use DoCRA in several government scenarios to help make real-time decisions about what to do:

  • State health department. A state health department uses DoCRA to classify an AI-assisted public health analytics system as high risk and prioritize data integrity controls. The team also documents residual risk for lower-impact systems as reasonable.
  • City emergency operations center. An EOC leverages DoCRA to justify using budget to buy AI threat detection for critical sensors and radios to ensure emergency services, but manages expectations for less critical administrative systems.
  • Federal regulatory agency. A federal regulatory agency documents why certain AI workloads run air-gapped in secure cloud enclaves with enhanced monitoring, recording its rationale for balancing performance with security.

 

In Summary: AI Cyber Risk and Government Agencies

AI is remaking the cybersecurity risk landscape for government agencies. It brings operational and strategic opportunity, but it also introduces risk in new ways that call for updated thinking, tools, and governance. Government organizations can protect their systems, their citizens’ data, and public trust by aligning their risk programs with government regulations and expectations, maintaining IR preparedness, meeting cyber insurance requirements, and grounding decisions in defensible risk analysis using DoCRA and reasonable security frameworks.

 

To successfully approach managing risk in the age of AI, government agencies and organizations should incorporate reasonable security into their risk strategy. 

Establish reasonable security through duty of care.

With HALOCK, organizations can establish a legally defensible security and risk program through Duty of Care Risk Analysis (DoCRA). This balanced approach provides a methodology to achieve reasonable security as the regulations require.

 

Review Your Security and Risk Posture

 

Read more AI (Artificial Intelligence) Risk Insights


References and Sources

  1. CISA. https://www.cisa.gov/critical-infrastructure-sectors

  2. CISA guidance on AI and cybersecurity. https://www.cisa.gov/topics/artificial-intelligence

  3. NIST Cybersecurity Framework. https://www.nist.gov/cyberframework

  4. NIST AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework

  5. FISMA context and requirements. https://csrc.nist.gov/projects/risk-management/fisma

  6. Biden Executive Order on cybersecurity. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/

  7. OMB Circular A-130. https://www.whitehouse.gov/wp-content/uploads/2016/07/Circular-AP130.pdf

  8. FedRAMP. https://www.fedramp.gov

  9. CIRCIA reporting requirements (Federal Register). https://www.federalregister.gov/documents/2024/04/04/2024-06526/cyber-incident-reporting-for-critical-infrastructure-act-circia-reporting-requirements

  10. FBI Internet Crime Report on growing threats. https://www.ic3.gov/Media/PDF/AnnualReport/2024_IC3Report.pdf

  11. Assessment of AI Threats and Adversarial Risk. https://www.nist.gov/system/files/documents/2023/09/15/ai-adversarial-ml-cyber.pdf

  12. HALOCK Duty of Care Risk Analysis (DoCRA). https://www.halock.com/docra/

  13. HALOCK Reasonable Security. https://www.halock.com/reasonable-security/