FutureCon Chicago Cybersecurity Conference 2026

Once again, HALOCK Security Labs and Reasonable Risk will be partnering at FutureCon to provide helpful industry insights. This year, Chris Cronin will address the subject everyone is talking about in 2026: AI.

Artificial intelligence (AI) can generate cybersecurity risk assessments in seconds—but speed and confidence don’t equal accuracy, accountability, or defensibility. In this session, Chris Cronin exposes why AI alone can’t manage cyber risk, how overreliance can make organizations more vulnerable, and where human judgment remains legally and operationally essential.


Be Our Guest at this Insightful Event

DATE: Thursday, January 29, 2026

WHERE: Live In Person | Virtual | Hybrid @ Chicago Marriott Oak Brook

CREDITS: Earn up to 10 CPE Credits


Session

Why AI Can’t Fix Your Cyber Risk (and Might Be Making It Worse)

Speaker: Chris Cronin, ISO 27001 Auditor | Partner, HALOCK and Reasonable Risk | Board Chair, The DoCRA Council


Panel Discussion

Securing the Future: CISO Insights and Industry Leaders Discussing Current Cyber Threats and Strategic Defense Practices

Panelist: Terry Kurzynski, CISSP, CISA, PCI QSA, ISO 27001 Auditor


More Session Details:

Why AI Can’t Fix Your Cyber Risk (and Might Be Making It Worse)

Since the release of ChatGPT in late 2022, AI has become the default answer to almost every cybersecurity problem—including risk assessments. AI and large language models (LLMs) can generate polished, confident-looking risk analyses in seconds. The problem? Confidence is not competence, and speed is not accountability. When used as a substitute for human judgment, AI-driven risk assessments can obscure real exposure, misrepresent priorities, and create a dangerous illusion of control.

In this session, Chris will demonstrate why AI is fundamentally incapable of managing cybersecurity risk on its own—and how overreliance on AI can actually increase organizational risk. Attendees will see where AI outputs break down, why “AI-generated” does not mean “defensible,” and how regulators, auditors, and courts still expect human decision-making grounded in reasonableness.

Chris Cronin, creator of the Duty of Care Risk Analysis (DoCRA) Standard, has advised governments, courts, Fortune 100 companies, and startups on cybersecurity risk analysis and regulatory compliance. His work centers on helping organizations make risk decisions that can be explained, justified, and defended—not just automated.

Chris will reveal the simple rule Reasonable Risk uses to decide when AI belongs in their SaaS platform—and when it absolutely does not. Attendees will leave with a clear framework for using AI as a supporting tool rather than a decision-maker, and a practical understanding of how DoCRA principles are shaping AI, cybersecurity, and privacy laws around the world.


About Our Speaker

Chris Cronin is a partner at HALOCK Security Labs and at Reasonable Risk. He is also the Chair of the DoCRA Council, a nonprofit that promotes the use of reasonableness in cyber risk analysis and law. He is the principal author of the DoCRA Standard and of CIS RAM, the Center for Internet Security's Risk Assessment Method. Chris works with organizations of all sizes and serves as an expert witness in post-breach cases. His current focus is on helping organizations use the new demand for governance to their advantage.

Learn how you can efficiently and effectively manage your risk program with Reasonable Risk, the only GRC SaaS tool with a Proven Governance System™.

More about the Reasonable Risk GRC SaaS Tool