AI Risk is Like Pencil Risk
By Chris Cronin
When people ask me about AI risk, I tell them about pencil risk. Pencils are steeped in risk. Just think of it: you can plagiarize with a pencil, you can forge a signature or a famous drawing, you can draw a picture of something that never happened, you can make pencils with toxic materials, pencil factories can dump hazardous chemicals into nearby waterways, and you can poke someone's eye out.
I don’t reference pencil risk to downplay AI risk. AI risk is extremely important to get right. I talk about pencil risk to point out how varied a topic “AI risk” can be. AI is a tool; it’s not a topic. So, when we think of AI risk, we need to think about normal risk topics and how AI exacerbates or mitigates them.
Consider the pencil risk examples. They are actually about social harm, environmental harm, and individual harm. Those risks come from using the tool as it was designed, or creating the tool in a hazardous way, or allowing environmental harm in service of making the tool, or using the tool in a novel way to hurt others. The tool isn’t the risk; it’s our uses and abuses that are the risk.
We have no uniform way to measure AI risk, but AI risk exists. Moreover, as we create and use AI technologies, we are responsible for understanding the risks we pose and mitigating those risks so they don't hurt us or others more than we should tolerate.
So how can we estimate and manage AI risks when there are no industry-standard approaches for doing that, and when AI technologies and risks change so rapidly?
At HALOCK, we apply two rules for analyzing and managing AI risk: apply DoCRA (Duty of Care Risk Analysis) when you're able; otherwise, use common sense.
The commonsense approach routinely asks whether we might violate a prohibition when we apply an AI tool to a given use case. If we might, what do we do about that?
Figure 1 – AI Risk Decision Engine

Image Source: HALOCK Security Labs on AI Risk
When we don't have sophisticated risk models or data available to us, we can still ask what we could do to detect and prevent harms we have not yet foreseen. AI presents an excellent use case for this.
Imagine that we want to implement an AI agent to handle routine customer service calls.
- We would ask what that agent could get wrong, just as we would when training new customer service employees. If those violations (errors or attacks) would be serious enough, we then ask what opportunities we have to detect and prevent them. (We should also speak with experts who are tracking what has gone wrong with AI agents.)
- If we can't conceive of any violations that would cause serious consequences, then we should be able to use the agent. If we can foresee violations serious enough that we would invest to prevent their harm, then we should determine what intermediary controls could detect and prevent them.
- If we can't think of intermediaries that would detect or prevent the violation, then we should not proceed with the use case. (The sketch after this list walks through the same decision flow.)
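To make the flow concrete, here is a minimal sketch in Python of the decision steps above. The Violation record, the decide function, and the example data are all hypothetical illustrations under the assumptions stated in the comments; they are not part of any HALOCK tool.

```python
# Minimal sketch of the commonsense AI risk decision flow described above.
# The data structures, names, and example data are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Violation:
    """A foreseeable error or attack for a given AI use case."""
    description: str
    serious: bool  # would the consequences be serious?
    intermediaries: List[str] = field(default_factory=list)  # controls that detect/prevent it


def decide(use_case: str, violations: List[Violation]) -> str:
    """Return a plain-language decision for the use case."""
    serious = [v for v in violations if v.serious]

    # No foreseeable serious violations: we should be able to use the tool.
    if not serious:
        return f"Proceed with '{use_case}'."

    # Serious violations with no detect/prevent intermediary: do not proceed.
    uncontrolled = [v for v in serious if not v.intermediaries]
    if uncontrolled:
        return (f"Do not proceed with '{use_case}'. No intermediary controls for: "
                + "; ".join(v.description for v in uncontrolled))

    # Every serious violation has at least one intermediary control.
    controls = sorted({c for v in serious for c in v.intermediaries})
    return f"Proceed with '{use_case}', contingent on: " + ", ".join(controls)


if __name__ == "__main__":
    # Hypothetical example: an AI agent handling routine customer service calls.
    violations = [
        Violation("Quotes a refund policy that does not exist", serious=True,
                  intermediaries=["human review of refund commitments"]),
        Violation("Discloses another customer's account details", serious=True,
                  intermediaries=["data-access scoping per caller"]),
        Violation("Uses an awkward greeting", serious=False),
    ]
    print(decide("AI customer service agent", violations))
```

The structure simply encodes the rule above: a serious violation with no intermediary blocks the use case, while everything else proceeds with the named intermediary controls attached.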
This, of course, is a very broad way to analyze AI risk. But without a finer risk analysis method, it’s a commonsense approach.
In future newsletters, we’ll discuss two other, more sophisticated risk analysis approaches that use real data and that incorporate DoCRA into decision-making. These risk assessment methods are AI Development Risk Analysis and AI Alignment Risk Analysis.
Our article on AI Development Risk Analysis will use OWASP guidance to evaluate the reasonability of development safeguards. Our future article on AI Alignment Risk Analysis will describe our model for determining whether AI use cases align with an organization’s ethics and governance responsibilities, and what safeguards would reasonably allow innovation while protecting others from harm.
Until then, be safe out there.