How Reasonable Security and DoCRA Can Strengthen Your AI Governance & Cyber Risk Management

The artificial intelligence (AI) world is moving faster than most of us thought possible. With its arrival comes a new set of cyber risks for which most companies have no playbook. As businesses race to operationalize generative models, large ML systems, and AI-enabled services, the question shifts from “do we need more security?” to “how can we show that our security is reasonable, justifiable, and defensible against legal, regulatory, and stakeholder expectations?”

Two related approaches, “reasonable security” and DoCRA, are becoming foundational to any defensible AI security and governance strategy for the next few years. This is about more than basic cybersecurity and risk management; it is about accountability, resilience, and trust.

 

The AI challenge: novel risks and accountability

AI systems bring with them new classes of risk: data poisoning, prompt injection, model inversion, deepfakes, synthetic identity, model supply-chain compromises, and unknown or unexpected bias and misuse. These risks have uncertain likelihood, unknown threat actors, and reputational, regulatory, or safety implications. We are far from being armed with good data and clear choices for how to secure AI systems; instead, we have a range of controls to choose from, an ever-changing threat picture, stakeholders with differing risk appetites, and responsibility for those whom we impact, all under time and resource constraints. Here, “because I have a checklist of controls from a standard” is insufficient: to whom are you accountable, and how do you justify the decisions you made?

 

“Reasonable Security” is the appropriate balance

Reasonable security is a criterion in state cybersecurity and privacy laws, referenced by regulators and advisory groups, and invoked in litigation. A reasonable security standard poses the question: have you implemented the administrative, technical, and physical safeguards that a reasonably prudent organization would apply given its size, complexity, sensitivity of data, and risk profile? Because AI brings both more potential for harm and more complexity in managing risk, your AI programs and their accompanying security will need to meet this “reasonable” test. That matters not just from a security perspective, but from a governance, legal, and trust perspective, too.

 

So…how do you prove that? With DoCRA.

 

How does the DoCRA Framework Demonstrate Reasonable Security?

If “reasonable security” is the requirement, then DoCRA is a structured framework for how to get there and show it. DoCRA (Duty of Care Risk Analysis) is a method for guiding risk evaluation: weighing the harm to different parties, proportioning the effort of safeguards, and justifying decisions so that they are defensible. For AI, where risks are diffuse, evolving, and multifaceted, DoCRA enables risk analysis that is proportionate and well documented:

  • Consider what harm an AI system might cause to end users, affected third parties, and the business.
  • Estimate the likelihood and impact of those harms, using whatever evidence is available (threat intelligence, the history of similar models and applications, etc.).
  • Choose safeguards that can lower that risk, while factoring in their cost, their burden, and their relationship to your mission.

 

Then select controls, document your reasoning, and monitor residual risk. The main considerations are the organization’s mission, objectives, and obligations, weighed against its risk priorities.
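The steps above can be sketched in code. This is a minimal, illustrative model of a DoCRA-style proportionality check; the 1-5 scales, the acceptance criterion, and all names are assumptions for the example, not part of the DoCRA Standard itself:

```python
# Illustrative sketch of a DoCRA-style proportionality check.
# Scales, thresholds, and names are assumptions, not the formal Standard.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible harm) .. 5 (severe harm to any party)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


@dataclass
class Safeguard:
    name: str
    burden: int               # cost/effort, on the same scale as risk scores
    residual_likelihood: int  # likelihood remaining after the safeguard
    residual_impact: int      # impact remaining after the safeguard

    @property
    def residual_score(self) -> int:
        return self.residual_likelihood * self.residual_impact


ACCEPTANCE_CRITERION = 8  # max tolerable score; set per mission and obligations


def is_reasonable(risk: Risk, sg: Safeguard) -> bool:
    """A safeguard is defensible if it brings the risk within the acceptance
    criterion and its burden does not exceed the risk reduction it achieves."""
    reduction = risk.score - sg.residual_score
    return sg.residual_score <= ACCEPTANCE_CRITERION and sg.burden <= reduction


prompt_injection = Risk("prompt injection", likelihood=4, impact=4)  # score 16
input_filtering = Safeguard("input filtering + output review",
                            burden=6, residual_likelihood=2, residual_impact=3)

print(is_reasonable(prompt_injection, input_filtering))  # True
```

The key design point is the two-sided test: a control is rejected both when it leaves too much risk on the table and when its burden outweighs the harm it prevents, which is the proportionality idea at the heart of duty-of-care analysis.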

This process speaks to what regulators, auditors, courts, and interested parties care about:

  • the potential for harm to others,
  • the responsibility/accountability of the organization,
  • and the reasonableness of the burden the controls impose.

 

What about AI Model Procurement & Supply Chain?

When acquiring a third-party model or AI-as-a-Service (AIaaS), an organization must document how it assessed the vendor’s security posture, model provenance, patching and update practices, and incident history, and how the chosen safeguards were proportionate to the risk.
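One way to capture that documentation in a machine-readable form is a simple assessment record. The structure below is a hypothetical sketch; the field names, vendor, and model are invented for illustration and are not a formal DoCRA artifact:

```python
# Hypothetical vendor/model due-diligence record; all fields and values
# are illustrative assumptions, not a prescribed DoCRA format.
from dataclasses import dataclass


@dataclass
class VendorAssessment:
    vendor: str
    model: str
    security_attestations: list[str]  # e.g., audit reports actually reviewed
    provenance_verified: bool         # training-data / weights lineage checked
    patch_cadence_days: int           # vendor's stated update interval
    known_incidents: list[str]        # public incident history reviewed
    safeguards_applied: list[str]     # controls we layered on top
    proportionality_rationale: str    # why safeguards match the assessed risk


record = VendorAssessment(
    vendor="ExampleAI Corp",          # hypothetical vendor
    model="example-llm-v2",           # hypothetical model name
    security_attestations=["SOC 2 Type II (2024)"],
    provenance_verified=True,
    patch_cadence_days=30,
    known_incidents=[],
    safeguards_applied=["output filtering", "usage logging"],
    proportionality_rationale=(
        "Moderate data sensitivity; vendor attestations plus output "
        "filtering bring residual risk within our acceptance criterion."
    ),
)
print(record.vendor)
```

Keeping each procurement decision in a record like this gives you the documented chain of reasoning that reasonableness arguments depend on.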

 

What about AI Operational, “in-the-wild” Model Deployment?

For AI models and systems in production, the organization must justify that it has identified the possible harms (misclassification, hallucination, misuse), assessed and documented their likelihood, and chosen the monitoring, logging, and rollback procedures it will use, keeping them under periodic review.
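A monitoring-and-rollback trigger can be as simple as a documented threshold check. The sketch below assumes a harm-rate metric and a 5% threshold purely for illustration; in practice the metric and threshold would come from your own risk assessment:

```python
# Illustrative in-production monitoring gate with a rollback decision.
# The metric and threshold are assumptions for this example.

HARM_RATE_THRESHOLD = 0.05  # documented, periodically reviewed trigger


def should_roll_back(harmful_outputs: int, total_requests: int) -> bool:
    """Roll back (or escalate to a human) when the observed rate of harmful
    outputs exceeds the threshold documented in the risk assessment."""
    if total_requests == 0:
        return False  # no traffic yet; nothing to decide on
    return harmful_outputs / total_requests > HARM_RATE_THRESHOLD


print(should_roll_back(12, 100))  # True: 12% exceeds the 5% threshold
print(should_roll_back(2, 100))   # False: 2% is within tolerance
```

What matters for defensibility is less the specific threshold than that it is written down, tied to the assessed harms, and revisited on a schedule.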

 

How do We Deal with AI and Incident Readiness, Response, & Resilience?

AI incidents (model misuse, data leaks, adversarial attacks on models) will not always follow expected patterns. Showing that your security program is built on a documented chain of risk justification and reasoning is what will set you apart when defending your response to such incidents.

 

Why is Regulatory and Litigation Defensibility Crucial in AI and Cybersecurity?

Courts and regulators will increasingly want to know: “Did you act as a reasonably prudent organization would?” As with any significant litigation or investigation, DoCRA documentation, roadmaps, and justifications become the essential evidence you need to show. Organizations should be able to demonstrate that they established “reasonable security” for their environment, incorporating their mission, objectives, and obligations.

 

Continuous evolution

AI systems are dynamic (data drift, model retraining, new threats). A one-time assessment exercise does not suffice. DoCRA’s emphasis on continuous monitoring and reassessment (and documentation thereof) maps onto the shifting risk landscape for AI systems.

 

Why is “Reasonableness + DoCRA” a Solid Security Strategy for AI?

AI is rapidly becoming part of core operations for companies across industries, from autonomous vehicles to personalization engines and decision-support systems. With those deployments come higher stakes and greater scrutiny. Enterprises that adopt a “reasonable security” lens with DoCRA as the companion method will be able to:

  • Engage stakeholders (customers, regulators, partners) with a clear, rational, documented account of how risks and safeguards are balanced.
  • Position the organization for evolving AI regulation, which will almost certainly incorporate “appropriate safeguards” and “proportionate controls.”
  • Improve risk maturity: applying DoCRA discipline to balancing the cost of controls against the reduction in harm improves investment decisions, monitoring, and awareness, and aligns security with the overall business mission.
  • Demonstrate due care proactively: when incidents occur, the DoCRA method means you are already in a position to show you acted responsibly.

 

AI security in the next 1-2 years will likely not be a conversation about “the latest tools and products.” It will be about governance, liability, and whether your security posture is defensible. Organizations that treat AI risk as a black-box problem will be left exposed. Those that embrace the “reasonable security + DoCRA” duo will not only anchor their cybersecurity approach; they will also be one step closer to a documented, stronger, more resilient risk profile.

 

References & Links

The DoCRA Council: “The Duty of Care Risk Analysis Standard (DoCRA or the Standard) presents principles and practices for analyzing risks…”

“What is the Duty of Care Risk Analysis (DoCRA)?”

Center for Internet Security (CIS): “Reasonable Cybersecurity Guide”

 

Review Your AI Security Posture