Why AI Can’t Fix Your Cyber Risk (and Might Be Making It Worse)
Since the release of ChatGPT, built on GPT-3.5, in late 2022, AI has become the default answer to almost every cybersecurity problem, including risk assessments. AI and large language models (LLMs) can generate polished, confident-looking risk analyses in seconds. But LLMs are not probability engines, despite what their vendors may claim.
In this session, Chris Cronin will demonstrate why AI is fundamentally incapable of managing cybersecurity risk on its own—and how overreliance on AI can actually increase organizational risk. Attendees will see where AI outputs break down, why “AI-generated” does not mean “defensible,” and how regulators, auditors, and courts still expect human decision-making grounded in reasonableness.
Chris Cronin, creator of the Duty of Care Risk Analysis Standard (DoCRA), has advised governments, courts, Fortune 100 companies, and startups on cybersecurity risk analysis and regulatory compliance. His work centers on helping organizations make risk decisions that can be explained, justified, and defended—not just automated.
Chris will share the simple rule Reasonable Risk uses to decide when AI belongs in its SaaS platform, and when it does not. Attendees will leave with a clear framework for using AI as a supporting tool rather than a decision-maker, and a practical understanding of how DoCRA principles are shaping AI, cybersecurity, and privacy laws around the world.
SPEAKER BIO
Chris Cronin is a partner at HALOCK Security Labs and Reasonable Risk and the Chair of the DoCRA Council. He is the principal author of the DoCRA Standard and of CIS RAM, the Center for Internet Security's Risk Assessment Method. His work as an expert witness has helped clients, regulators, and litigators evaluate the reasonableness of security controls during post-breach legal action. Chris is an active member of the Sedona Conference, a nonprofit think tank that publishes commentaries and guidance for the bench, the bar, and the public.
DISCOVER MORE ABOUT DUTY OF CARE RISK ANALYSIS (DoCRA)
It is your duty of care to provide reasonable safeguards for protected data.
To successfully manage risk in the age of AI, organizations should incorporate reasonable security into their risk strategy.
Establish reasonable security through the duty of care.
With HALOCK, organizations can establish a legally defensible security and risk program through Duty of Care Risk Analysis (DoCRA). This balanced approach provides a methodology to achieve reasonable security as the regulations require.
What are DoCRA and Reasonable Security? How are they related?
With the widespread use of AI (artificial intelligence), it is essential to understand the security and risk profile of your operations.
Review Your Security and Risk Posture