During the late-1990s dot-com boom, businesses raced to incorporate the internet into their operations, chasing the limitless opportunities it promised. Few foresaw the security vulnerabilities and attack vectors that would emerge. Much of today’s cyber risk landscape stems directly from that rapid, security-light internet expansion.
Today’s AI (artificial intelligence) revolution resembles that era. Organizations are adopting AI technologies rapidly for both efficiency and competitive advantage, but risk exposures linked to these advancements are only beginning to surface. Where defenders may have initially benefited from advanced AI-driven threat detection, the playing field has levelled as attackers have quickly learned how to leverage and manipulate this technology. AI now empowers both security teams and adversaries alike.
Why are Risk Assessments Important?
A risk assessment identifies and prioritizes the threats that could harm your organization from cyberattacks and compliance failures, to operational disruptions. AI alone introduces a multitude of risks, including data poisoning, prompt injection, deepfakes, and model inversion. Those are on top of the same risks that have plagued businesses for years, such as ransomware and supply chain attacks. A risk assessment helps your organizational leadership understand which risks are most likely and most damaging, so you prioritize your resources to prepare effective response tactics. While every business faces similar cyber risks, each industry has different risk exposures, each industry has different levels of risk exposure.
Healthcare
- Medical device vulnerabilities: Internet-connected medical devices (IoMT) and legacy systems contain exploitable security flaws that threaten patient safety
- AI-accelerated ransomware: Attackers use AI to rapidly identify and target critical patient care systems, maximizing disruption potential
- Life-or-death pressure: Healthcare ransomware demands reach extreme levels because service disruptions directly endanger patient lives
Finance
- AI-powered deepfake fraud: Sophisticated audio and video impersonations convince employees to authorize fraudulent wire transfers worth millions
- Micro-transaction theft: AI orchestrates high volumes of small fraudulent transactions designed to fly under automated detection thresholds
- Insider threat amplification: The high-value nature of financial data and systems makes employees more vulnerable to temptation, recruitment, or coercion
Retail
- Seasonal attack timing: Cybercriminals strategically target peak shopping periods when transaction volumes surge and security teams are stretched thin
- Point-of-sale (POS) vulnerabilities: Payment terminals and e-commerce platforms present expanding attack surfaces for card skimming, data theft, and transaction interception
- AI-enhanced social engineering: Sophisticated phishing campaigns impersonate customer complaints or support requests to trick employees into revealing credentials or processing fraudulent transactions
Transportation
- Legacy system vulnerabilities: Aging infrastructure and outdated control systems may lack modern security features, while IoT sensors across logistics networks create thousands of potential entry points
- Autonomous vehicle exploitation: Self-driving and semi-autonomous vehicles face threats from sensor spoofing, GPS manipulation, and remote-control system compromise
- Fleet tracking: GPS and telematics systems in connected vehicles can be exploited to track high-value cargo for planned thefts
Hospitality
- Guest data exposure: Breaches compromise sensitive personal information, including payment card data, passport details, loyalty program accounts, and travel itineraries
- Third-party platform vulnerabilities: Integration with booking sites, travel agencies, and payment processors expands the attack surface
- AI chatbot manipulation: Customer service chatbots can be compromised or manipulated through prompt injection attacks to leak guest information, provide fraudulent booking confirmations, or social engineer employees
Nonprofit
- Resource constraints: Limited budgets force nonprofits to operate with minimal cybersecurity staffing, outdated systems, and inadequate protective technologies
- AI-powered donor fraud: AI can imitate a nonprofit’s messaging to fool donors into sending money to scammers or sharing payment details
- Workforce vulnerability: Small, mission-focused teams often lack regular security training, making staff susceptible to phishing attacks, social engineering, and deepfakes
Public Companies
- Shadow AI proliferation: Employees across departments deploy unauthorized AI tools and services without IT oversight, creating blind spots and risk exposure
- Executive-targeted social engineering: AI-generated deepfakes and personalized phishing campaigns impersonate C-suite executives to authorize fraudulent transactions or extract confidential business information
- Market manipulation attacks: Coordinated AI-driven disinformation campaigns can spread false information about earnings, leadership, or operations to inflict reputational damage and stock manipulation
Shadow AI is a Real Problem
IT leaders have been grappling with shadow IT for years now, but shadow AI is now a very real problem as well. According to the 2025 IBM Cost of a Data Breach Report:
- One in five organizations suffered a breach due to security incidents involving shadow
- Breaches involving shadow AI were $670,000 more than the average breach price tag
- These incidents also resulted in more personally identifiable information (PII)
It’s not just shadow AI that is the problem, however. According to a recent 2025 survey, 78% of CISOs now say AI-powered cyberattacks are significantly impacting their organizations. So how does an organization properly leverage this technology while also protecting against the risk that it introduces?
Reasonable Security is the Answer
No one expects your business to eliminate all AI-related risk. What regulators, auditors, and courts do expect is that your leadership implement security measures a reasonably prudent organization would apply, given its size, industry, data sensitivity, and risk profile. To ensure compliance and protection from litigation, your AI programs and their accompanying security will need to meet this “reasonable” test.
Fortunately, DoCRA provides a structured approach and framework that outlines how to get there and show it. Organizations that adopt the DoCRA methodology demonstrate to regulators, auditors, and courts that they have systematically evaluated AI risks, implemented appropriate controls, and maintained ongoing oversight. It is an approach that HALOCK has helped create and implement for years now. Learn how to reduce AI-related risks to your organization.
Review Your AI Security Posture
Be Our Guest at FutureCon Chicago 2026
Enjoy breakfast and lunch while connecting with colleagues and industry executives.
Session: Why AI Can’t Fix Your Cyber Risk (and Might Be Making It Worse)
Speaker: Chris Cronin, ISO 27001 Auditor | Partner, HALOCK and Reasonable Risk | Board Chair, The DoCRA Council
DATE: Thursday, January 29, 2026
WHERE: Live In Person | Virtual | Hybrid @ Chicago Marriott Oak Brook
CREDITS: Earn up to 10 CPE Credits
