HALOCK Security Labs and Singulr.AI bring together real-world insights in this 50-minute webinar.

TITLE: AI RISK INSIGHTS FROM THE FIELD, a practical guided tour

Wednesday, February 11, 2026 | 12 PM CT | Virtual

Organizations are moving fast as FOMO accelerates the use and development of AI models. Will the benefits outweigh the risks?

Discover practical methods and tools to identify and manage AI-related risks during this expert-led session.

OBJECTIVES:

    • What risks does AI pose?
    • How do I assess my AI-related risks?
    • What methods and tools can address known AI risks?


VIEW THE RECORDING

SPEAKERS:

Terry Kurzynski, Senior Partner – HALOCK

Richard Bird, Chief Security Officer – Singulr AI

Richard Bird has been around the block. In fact, if you happen to be at any cybersecurity conference somewhere in the world, there’s a pretty good chance you’ll see him on the corner heading in with you. Richard is a 6-time C-level exec in both the enterprise and start-up worlds, an author, a media personality, and an ever-present voice in identity security, AI security, digital consumer protection, and privacy rights.

In the cybersecurity world, he is best known for his prior roles, such as Global Head of Identity for Consumer Businesses at JPMorgan Chase and Chief Customer Information Officer at Ping Identity. Richard is frequently quoted on cybersecurity topics and headline news events in the media and has been featured by Fast Company, The Wall Street Journal, CNBC, Bloomberg, The Financial Times, Business Insider, CNN, Dark Reading, and TechRepublic.


TRANSCRIPT

This is AI Risk Insights from the Field, brought to you by HALOCK and Singulr, a joint partnership.

A bit about myself: I’m Terry Kurzynski, founding partner of HALOCK Security Labs.

Bunch of credentials. We’re gonna skip a lot of this because we’re gonna make up some time. How about that? But, Richard, you wanna give a few words on yourself from Singulr?

Yeah. Super easy. Just look me up on social media or type in Richard Bird cybersecurity identity or AI security. You’ll find out everything about me.

Well, we met last year at the Rocky Mountain Information Security Conference. I appreciate all your insights on AI security, which led to this partnership, so I appreciate that. So, AI risk considerations, this is the agenda. Cybersecurity risk, AI standards and regulations.

We’re gonna talk about the benefits of AI versus human suffering, the OWASP Top Ten for LLMs, and then solutions and summary. So let’s get into it. Some cyber stats. So it is estimated that eighty percent of phishing attacks are AI-generated.

Sixty-three percent of companies are implementing new technologies, including Gen AI, to support cybersecurity amid employment shortages. That’s an interesting one. Ninety-seven percent of companies are reporting Gen AI security issues. So we start out with some fun facts.

So what do we mean by AI risks? Let’s start there. Increased cybersecurity risk with our use of technology. So we can think of this as the additional control side of it.

Right? We have some issues with cybersecurity.

Our use of AI may create unintended consequences or harm. Outside organizations may use AI and put us at risk, and then there’s malicious use of AI by threat actors. So I’m gonna dig into the two of these. Right?

So we have increased cybersecurity risk with AI. And again, this is more about the control side. Right? Data leakage, platforms, you know, you get these MCP servers.

They have a lot of secrets and tokens, API security, etcetera. So these are the technical security controls and additional vectors that might leak out data or create issues relating to cybersecurity. But we also have this new one that we need to consider, the public benefit alignment. And this is where we really now need to think about the kinda human suffering side of things.

Right? So we might have bias or IP infringement, job displacement, false information, or disinformation. We might have environmental impacts, as we’ll see, and misuse by threat actors. So these are the two we’re gonna kinda dig in today.

And this public benefit alignment, I’m gonna talk about that one first, which is that there are benefits we may share with the use of AI, but there are also these risks that we pose, and we need to think about the balance of these for this public benefit alignment. Some of the regulations. So in the US, the California Consumer Privacy Act (CCPA) has some updates. It requires AI risk assessments when there is automated decision-making, requires cybersecurity audits for reasonable controls, requires privacy risk assessments, and, this is a big one, companies have to report the risk assessments to the CPPA, the guiding body in California over the CCPA.

That’s kind of a new thing. No one’s ever had to submit their risk assessments to a regulator before. These are the dates for enforcement. So that’s one to think about at a state level.

If you’re doing business in that state and you meet the requirements, you’ll have to consider these things. The automated decision-making could be things like insurance underwriting or applying for a loan. Right? You might have a workflow.

Those are maybe automated decision-making models or maybe health outcomes, etcetera. They’ll need these AI risk assessments.

Alright. And also, the US SEC has a cybersecurity and emerging technologies group, and their focus, their mission is on investigating AI fraud, AI-themed scams, cybersecurity deception, and false statements about emerging technologies, including AI. So they’re gonna investigate misleading AI disclosures, you know, maybe overstatements of AI-driven financial strategies, and AI-driven deception and online scams. So they’re protecting the investors, but they’re also looking into these AI scams out there.

So there’s nothing really at the federal level in the US. The best we have is a deepfake bill that’s pending right now. So there is no real federal legislation for AI.

But in the EU, there is the AI Act, as we’re calling it, in the EU, and that came out in twenty-one. And it had specific objectives. I’ll just kinda jump to the bottom here, which is they want to facilitate the development of a single market for lawful, safe, and trustworthy AI applications to be innovated. So that’s really the nature of this.

But the issue with the AI Act, as well as the Cyber Resilience Act, which is a sister companion, is that neither one describes in detail how a risk assessment should be performed. Right? That is, until ETSI published 103 935, version 1.1.1, in 2023. It explicitly calls out how to perform a cyber risk assessment.

So, specifically, they call out Duty of Care Risk Analysis (DoCRA) in the standard. If you go right to section 8.1 in there, it actually specifically calls out the standard. So there is instruction on how to perform a risk assessment for AI in the EU, and you can follow the ETSI 103 935 standard. So what is the duty of care?

It’s been brought up here.

As organizations think about delivering their goods and services, they need to think about the harm they can cause. Right? We do that as people. It’s called the reasonable person sort of standard, but, you know, organizations need to think about their duty of care.

If you are breached and your case goes to litigation, the judge will determine whether you were performing your duty of care. Did you consider the harm to all parties? Did you consider the gravity of those injuries, and then did you put in non-burdensome controls that would have brought that down to an acceptable level of risk? So the legal concept of duty of care and due care requires that organizations demonstrate that they use controls and they bring that risk down to an acceptable level.

Okay? And that’s really the net of it.

If you think about it this way, this is a great picture. I love this. There’s a certain risk we might be willing to take on as an organization here on the left, and that risk is very calculated. But what we may not understand is that while we’re able to take on maybe a certain amount of risk, we might be dragging others into a bigger risk, and that is the concept of duty of care.

As we go about our business and create our products and goods, we need to consider the harm to others. And that might be clients. It might be business partners. It might be our own employees, but it also could be just the general public and mankind.

It does not have to be limited to a contractual, you know, partnership with any particular entity at all.

So, the Duty of Care Risk Analysis, if we just look at the adoption, we’ve already talked about the duty of care foundation in ETSI guide, but the Duty of Care is a foundation for assessing liability in the legal system. The CIS RAM is based on duty of care risk analysis and has a hundred and forty thousand downloads. It’s probably a lot more by now.

DoCRA has been recognized by the attorneys general in the US, utilized by federal regulators for injunctive relief, and utilized by AG offices in forty-nine states to define what a reasonable control is. So let’s move on to these, which are just some examples in cases right now in the US where DoCRA was explicitly referenced.

If we think about applying the duty of care in the US versus the EU, we think about the word or term, reasonability. Did you consider the harm? Did you bring the risk down to an acceptable level for all potential parties and claimants? In the EU, they think about it as balancing innovation with protection. They just have a little bit different word and concept on it, but the duty of care is leveraged for both of these. So, how to meet your duty of care? Did you think through the likelihood and potential harm of human suffering versus the business impacts?

Did you think through the magnitude of that potential harm? Was it an inconvenience, or are we poisoning the environment? Right? Did you consider the safeguards to reduce the risk of harm to an acceptable level, acceptable to all potential parties?

Do you have a framework for balancing reasonableness and burdensomeness? And do you have a framework to balance innovation with protection? So these are kinda how you meet the duty of care at a high level here. We know there are AI benefits.

Right? You and I talked a lot about this, Richard. So, increased efficiency, creativity, and innovation. I know I use it every day at work and in my personal life.

Rapid problem solving. Hyper-personalization, especially in, like, training now. Training can be whipped up and be very, very personalized. Lower labor costs, right, because lower-skilled people can now do a job, because AI is augmenting what they’re doing.

Improved decision making, right, because we have all this data, and they can crunch large datasets. Autonomous systems are now coming online, and we’re starting to use security at machine speeds to defend against threat actors. Right? So that’s something new that sort of came on.

So these are the benefits. But then there’s that public benefit alignment, human benefit versus the human suffering. And we need to think through things like what about the land and, you know, water? So these AI, these hyperscaler data centers use ten times the electricity.

The Meta one in Louisiana alone is expected to draw twice the electricity of the entire city of New Orleans. Okay? Evaporation cooling leaves behind pollutants and contaminants. So we need to think to ourselves, okay.

That’s nice that I have a robot that’s feeding my dog while I’m on vacation, but, you know, we’re killing our children because they don’t have water, you know, clean water to drink. Right? I mean, that’s a pretty abrasive look at it, but that’s kind of what we’re talking about, that balance test.

Another example of maybe human suffering is this IP infringement. So Anthropic was incorporated as a public benefit organization, and their first act was to steal copyrighted material to build their large language model, and then they got sued for that, and they paid a big settlement. But the idea of a public benefit is that you’re supposed to consider the harm, and they didn’t. They actually went off and did it anyway.

Anyway, so this is a little bit about the IP infringement. I’m gonna keep moving on, though. The balance test. So what we really need to consider now, and when we say prohibitions, we’re talking about this public suffering or harm, these outcomes that we don’t want.

We need to think about the AI tool, its use case, and whether it’s gonna create one of these situations where it’s gonna create harm. Is there a safeguard that would reduce that prohibition, or that harm, down to an acceptable level? If we can’t, then we shouldn’t do it. And if we can with the safeguard, then we should consider doing it; that’s really this balance test, and that’s really getting into AI governance.

The business knows that their use of AI provides a benefit to everyone that is greater than the risk to any one person. This is the balance test. The benefits must outweigh the harm. Okay?
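The balance test described here can be sketched as a simple scoring rule. The scales, weights, and acceptance threshold below are illustrative assumptions for this sketch, not values taken from DoCRA or any published standard.

```python
# Illustrative sketch of a duty-of-care balance test.
# The 1-5 scales and the ACCEPTABLE_RISK ceiling are hypothetical.

ACCEPTABLE_RISK = 6  # hypothetical ceiling on likelihood x impact

def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on a 1-5 likelihood and 1-5 impact scale."""
    return likelihood * impact

def balance_test(harm_risk: int, safeguarded_risk: int, safeguard_burden: int) -> str:
    """Accept an AI use case only if a safeguard brings the risk of harm
    down to an acceptable level without an undue (disproportionate) burden."""
    if harm_risk <= ACCEPTABLE_RISK:
        return "proceed"                     # already acceptable as-is
    if safeguarded_risk <= ACCEPTABLE_RISK and \
            safeguard_burden <= harm_risk - safeguarded_risk:
        return "proceed with safeguard"      # burden proportionate to risk reduced
    return "do not proceed"                  # benefits cannot outweigh the harm

# Example: inherent risk 4 x 5 = 20; a safeguard reduces it to 2 x 3 = 6
print(balance_test(harm_risk=20, safeguarded_risk=6, safeguard_burden=8))
```

The key design point is the proportionality check: a safeguard is only "reasonable" if its burden does not exceed the risk it removes, which mirrors the reasonableness-versus-burdensomeness framing above.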

So we have these new cybersecurity risks posed by AI, and I think this is, you know, something that Singulr is really hot on. But the DLP, the platform, the data leakage, access controls, can we trust the data? Are people able to either manipulate the datasets, intentionally or unintentionally, to create bias? And then we have this weak token pass-through and many more.

This is just a few of them. So we have MCP servers. We talked about the MCP, model context protocol, servers, gateway servers having all the keys to the kingdom. A year ago, we in the security industry said this was gonna be a problem, and sure enough, it was.

So we’re finding out that Anthropic’s MCP server has, you know, gone mad over the last couple of weeks here, and there are a couple of CVEs out there. So that’s a good example where that actually happened. We’ve seen the Clawdbot dumpster fire. Right?

So it's now named OpenClaw because of copyright issues, but it’s an AI assistant that wants access to everything, your passwords, your personal data. Researchers discovered that all those keys were misconfigured. Access to control panels was misconfigured. Thousands of servers were exposed.

Demonstrates how quickly things in AI can go wrong at lightning speed. This is not unique to Clawdbot; that’s just how AI works.

So with that, oh, one more. AI native browsers. So Perplexity has Comet. Right? This is like your own personal system, but in the corporate use, right, that you can load up.

It wants access to everything. Well, Amazon’s suing Perplexity because Perplexity is misrepresenting itself as Google Chrome so it can evade security controls at Amazon. Not good, so Amazon’s suing them over that. So again, more security issues to consider.

OWASP has put together a list of the top ten controls for LLMs. There’s another list for agentic AI we’ll get to, but for the LLMs, there are ten of them, preventive and detective controls.

However, the OWASP top ten controls assume you have a comprehensive and continual inventory of all your AI assets: the active LLMs, the embedded AI from SaaS providers, agentic AI architecture components, shadow AI, and even malicious AI. It assumes you kinda have a way to determine that before you go to the list of top ten controls. So with that, I’m gonna reintroduce Richard Bird, the CSO of Singulr, and get Singulr’s perspective on discovery and inventory.

Sure. And I think we’re in a good spot now. If we’re time tracking, I think we’ve caught back up. So thank you, everybody. And a really great table setting, Terry. I think there are so many things that come out in what you’ve shared.

And I think the first is the most important. And as we talk about AI discovery and inventory from our perspective, it is a truth that is not being readily admitted in the marketplace or in the enterprise, which is that every failure mode that you just described is not because the design is bad. It’s because our assumptions are wrong.

This assumption that we’re going to back-fit AI into our current infrastructures, architectures, governance frameworks, and control models is wrong.

And it’s wrong in a way that is already provable. This is not Richard’s opinion. As I like to mention about Clawdbot, now Moltbot or whatever its current name is today, the argument that a lot of large legacy security platforms have been making, that we’ll protect you from AI, falls flat when you consider the fact that none of those platforms caught Moltbot.

That was surfaced by a research organization.

However, we know that Moltbot was already propagating extensively in a large number of companies. So why on earth would organizations that are clearly stating we can protect you, why didn’t they catch that? And that’s because the assumptions of how AI operates are leading us down this path of, we’ll just put it back into our current security architectures, and we’re gonna be great. That’s already been proven to be false.

So when we talk about inventory and discovery at Singulr, it’s something that we have to be very, very clear about. Knowing about something is good from an accounting standpoint. So discovery, you hear this frequently said, and I absolutely hate it. I was a longtime practitioner in the enterprise.

You can’t protect what you can’t see. That’s just so trite. Right? The real statement is you can’t control the unknown.

You don’t wanna see it. You wanna control it. You wanna get your arms around it. You wanna understand the risk.

You wanna understand its exposures. You wanna understand its possible vulnerabilities and attack surfaces, and that’s really where Singulr puts its focus. We’re doing a contextualized discovery. So we find AI inside environments, features, agents, and services.

And when we do, we have two specific capabilities that help us meet the evergreen life cycle management expectations of LLM o one zero nine, and that is this contextualization. It’s not enough to just know that an AI has an IP address or, you know, the geo that it’s coming from. Does it have a published website? Does it have a published DPA? Is it embedded in a current OEM manufacturer, Salesforce, Figma, or Canva?

All of these details are absolutely necessary to be able to control that AI thing. Absent those details, security can and will only ever be reactive.

There’s a proactive necessity here, which is the domain of controls, where we have to have this rich information. So, if we can go to the next entry on the slide.

I’ve already mentioned it, but it’s not enough to create a library or a data store of the AI things. This is a living inventory. It will change daily. Not only will it change because new AI agent services or features are discovered by Singulr, but it will also change because of the versioning of an agent, a service, or a model.

And, each of those versions can introduce new or compounding risks in your organization. So if you don’t have a line of sight on those changes, you will, again, assume that your current GRC functions around sanctioning and approving the use of a thing will meet the requirement. But a one-and-done sanctioning is extremely problematic with AI because AI is dynamic. And because AI is dynamic, it can change behaviors, change things that it’s reaching out to, change agents that it’s communicating with.

You’ve already sanctioned. If there’s not this living inventory and evergreen process of reevaluation, you will continue to be exposed to harm and damage that you allowed in. I always like to equate it to inviting the vampire into your house, and you ask him to stay in the living room. And two weeks later, you’re like, hey.

What are you doing in my master bathroom?

Right? There are a whole lot of conversations around the authorization layer that contribute to that analogy, but we can dig into those later. And then you gotta be able to enable continuous visibility, not just into the inventory, but into the prompts, into the data exchanges, into the model outputs and flows, as well as the outputs and responses and where they’re going out in the interwebs. So we can go to the next slide.

Nice.

Just a bit about Singulr’s worldview on this. We don’t think that discovery is enough. We believe that contextual discovery is the way to win.

What does that mean, contextual discovery?

Yeah. It’s the enrichment of the actual find. Right? So, and I briefly mentioned it before, if I find an AI service, there are several pieces of information that are not specific to security, that kind of encompass all of the business functionality, logic, and technology, even down to the code level.

You know, it’s a recognizable Python-coded thing in the AI space, and that is extremely helpful in being able to dimension that risk. So the example that I like to use all the time is: I discover an AI service inside of company X using Singulr. Because we have a massive inventory, about four million fingerprints of AI services, agents, and features that we are collecting, I have a comparative that I’m able to go to, okay.

Here’s this AI thing. Is it currently in Singulr’s repository, and what do we know about that thing? Where is it located? And the example that I always like to use is: I’ve got a new service.

Somebody’s requested to use it. I now have all the enhanced research data part of that contextual record to go, oh, wait a minute. This is based in Syria.

It has no published website. It is riding on mass network traffic. It, you know, doesn’t have a published DPA. Those types of pieces of information represent risk, but they don’t represent security knowledge, because it doesn’t become a problem from a security standpoint until I agree to use it without all that information, and then I allow it to manifest as a potential exploitable surface.

So the best security is up front, since you can’t bring it back once it’s out there.

Right?

Exactly. The best security in that instance is saying no.

And we provide the intelligence to allow you to make an informed decision about what should and should not be allowed in your environment.
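The contextual red flags Richard describes (no published website, no DPA, a risky jurisdiction) can be sketched as a small rule set. To be clear, the attribute names, country codes, and rules below are hypothetical illustrations, not Singulr's actual scoring model.

```python
# Hypothetical sketch of contextual risk flagging for a discovered AI
# service. Attributes and rules are illustrative only.

HIGH_RISK_JURISDICTIONS = {"SY", "KP", "IR"}  # example country codes

def risk_flags(service: dict) -> list[str]:
    """Return the contextual red flags for a discovered AI service."""
    flags = []
    if not service.get("published_website"):
        flags.append("no published website")
    if not service.get("published_dpa"):
        flags.append("no published DPA")
    if service.get("jurisdiction") in HIGH_RISK_JURISDICTIONS:
        flags.append(f"high-risk jurisdiction: {service['jurisdiction']}")
    return flags

def decision(service: dict) -> str:
    """The best security is often saying no before use, not reacting after."""
    return "deny" if risk_flags(service) else "review and allow"

svc = {"name": "unknown-llm-api", "published_website": False,
       "published_dpa": False, "jurisdiction": "SY"}
print(risk_flags(svc))
print(decision(svc))
```

The point of the sketch is that the decision happens before the service is ever allowed in, using enrichment data rather than post-incident telemetry.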

That real-time usage and tracking then nets into reporting. That reporting gives you auditable and traceable information about each one of those agent service feeds as they’re being used, and then that continues to roll into dynamic risk assessment.

Risk cannot be measured in the AI space as a one-and-done either.

You know, a behavior can change, and there may be risky elements that you wanna incorporate. My favorite one is because Singulr ties to identity stores, Okta, Entra, and Ping. And because we’re able to discretely associate the use and exposure of AI things down to the individual level, you know, a great use case is maybe you have a lab function or a research function that would really like to explore, say, DeepSeek R1. Right?

You wanna get into it, but you certainly don’t wanna, you know, let everybody in the company use it. And you also wanna use it in a way that it’s isolated and contained so that it doesn’t do unintended damage to your company. So in Singulr’s case, we can take that understanding of the risk. A customer now wants to take and apply the use of or exposure of DeepSeek to one to five researchers in one department.

Being able to isolate allows people to be able to test, experiment, and be able to understand either their exposures or figure out, you know, how they want to accelerate innovation, say, with a service that they wanna try out first but don’t wanna have on for everybody in the company initially. So we can go to the next slide.

Sure.

Now, the Singulr solution requires a lot of data, which brings up the question: how do you do discovery? Well, the great thing today is that all large EDR platforms and endpoint solutions have readily available information that we can certainly grab easily and move into our own engines. That’s where the contextual enrichment comes in.

So, you know, the argument could be made, well, why don’t I just use solution X to do this? Well, the problem is that solution X only represents your attack surface and exploit surface for the traffic that goes through it. Right? And one of the things that we hear frequently now is, why don’t I just get a browser extension, and I can do that.

And I’m like, cool. Tell me how an agent delegating to another agent is going to be caught in a browser extension. Right? This landscape is rapidly growing outside of just the individual user’s exposure to your corporate asset.

And so you have to draw information in from all of these sources. You have to be able to enrich it.

It’s the whole picture, is what you’re saying.

Exactly. AI security, governance, and control is not a point-solution world. Right?

You have to have a very holistic approach to this challenge.

So that being said, all of those input feeds then net to our AI discovery inventory classification capabilities.

But there’s a much bigger set of control problems as we’ve seen in the OWASP top ten for LLM specifically, as well as the top fifteen for agents that go well beyond just, okay. Now you can control what’s known.

What are the problems that you’re faced with next? And we can take a look at a couple of those as part of this conversation.

Sounds good.

So this is the OWASP number three, which is the supply chain.

And this is just talking about, and if you want, you can take this on too, but this is just emphasizing the same thing: verifying and signing third-party models. Right? So your third parties are using AI and interacting with you, and we need to consider that harm. These are the preventive and detective controls straight from OWASP here for number three.

Yeah. And I do love the way, and I’m friends with a lot of people on the OWASP AI Exchange projects, and, yeah, each of the, you know, different top tens. Our good friend Rock Lambros is a great example.

And it’s interesting to see the correct hyperfocus on preventative and detective.

Right? That’s a maturity scale. But there are three types of controls: preventative, detective, and the utopian state of predictive. Right? And, obviously, in the OWASP framework, we can’t address predictive, because there’s an assumption of a level of maturity across organizations; they need to get to detective and preventative first.

Now, the reason why I state that is that when we look at the preventative and detective control space, specifically as it relates to third-party risk and supply chain risks, there is a huge gap in the AI space. So huge that if you take a look at the news stories of the last two years relative to any AI bad things, it has almost universally been a third-party or a supply chain exposure.

There has been very little reported in the news about self-inflicted wounds, or first-party wounds. These problems are manifesting first in the environments that extend out from us, which creates a serious set of problems as it relates to AI. And Singulr is super sensitive to this, because the real problem with all supply chain security, the reason why SBOMs were, you know, first developed, the reason why the AI BOM is such a large part of the conversation, is opaqueness. Right? I mean, the BOM approach is to, you know, force disclosure of all of these relationships.

But that’s really problematic because for a lot of companies, even with a defined AI BOM or SBOM, it is only transparent to certain points. I always like to use the example of my prior company. I worked specifically in API security, and we found a situation with a customer where the vulnerability had actually manifested thirty-seven companies away from the core company. We’re not talking about third-party risk anymore, and supply chain risk is a little bit abstract.

We’re talking about nth-party risk. You are exposed to so many. Thirty-seven. Yeah. Exactly.

Right?

And so this means that we have to address the opaqueness.

Yeah.
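The nth-party exposure Richard describes can be made concrete with a walk over a supplier dependency graph. The graph, vendor names, and data shape below are hypothetical; this is just a sketch of how you would measure how many hops away each upstream AI dependency sits.

```python
# Sketch of surfacing nth-party exposure: breadth-first search over a
# (hypothetical) supplier dependency graph, reporting hop distance.

from collections import deque

def nth_party_exposure(graph: dict, root: str) -> dict:
    """BFS the supplier graph; return {supplier: hops from root}."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in depth:          # first visit = shortest hop count
                depth[dep] = depth[node] + 1
                queue.append(dep)
    depth.pop(root)                       # don't report ourselves
    return depth

# Hypothetical chain: us -> vendorA -> modelhost -> gpu-cloud
graph = {"us": ["vendorA"], "vendorA": ["modelhost"], "modelhost": ["gpu-cloud"]}
print(nth_party_exposure(graph, "us"))
# {'vendorA': 1, 'modelhost': 2, 'gpu-cloud': 3}
```

A vulnerability thirty-seven hops away, as in the API-security example above, is simply a node with depth 37 in this picture, which is exactly why opaqueness past the third party is so dangerous.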

This ties directly into visibility through inventory and cataloging, as we do. But now it requires us to be able to bifurcate or separate, which of these things are services that are being, you know, manifest and exposed to you internally, and which of these things are chained together. And instead of chaining together, as I was just saying earlier this morning with somebody, it’s the proxying of proxies. Right?

This extended proxy chain of AI exposure, you have to be able to visualize that, and that’s exactly what we do. We’ll go to the next bullet. Yeah. Supporting, you know, governance over model provenance is a huge problem in the third-party space.

And I mentioned this before. This is why you must be able to see versioning changes within models and services. If you don’t see the versioning changes, you know, you’ll run into situations where somebody will say, well, you know, we sanctioned that, and we’re safe. And we’ve got all the control policies in place; they’ve cascaded to the security controls.

And then you go, hey. Well, did you know that that version changed three versions ago?

And there’s a kinda like this moment. Like, that might be a problem. It may not be a problem, but if you don’t have that knowledge, you can’t reevaluate what’s going on within your supply chain. And that continuous evaluation is absolutely necessary and also supported.
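The versioning problem just described, a one-time sanctioning going stale as models drift, can be sketched as a simple inventory diff. The component names and data shapes here are made up for illustration.

```python
# Sketch of an evergreen-inventory check: flag any sanctioned AI
# component whose observed version has drifted since approval, plus
# anything observed that was never sanctioned at all (shadow AI).

def needs_reevaluation(sanctioned: dict, observed: dict) -> list[str]:
    """Return component names that require a fresh risk evaluation."""
    stale = [name for name, ver in observed.items()
             if name in sanctioned and sanctioned[name] != ver]
    unknown = [name for name in observed if name not in sanctioned]
    return sorted(stale + unknown)

sanctioned = {"summarizer-agent": "1.2.0", "support-bot": "3.1.4"}
observed   = {"summarizer-agent": "1.4.0",   # version drifted: reassess
              "support-bot": "3.1.4",        # unchanged: still covered
              "shadow-notetaker": "0.9.1"}   # never sanctioned: shadow AI
print(needs_reevaluation(sanctioned, observed))
```

Run continuously against live discovery data, a check like this is what turns a one-and-done sanctioning into the living inventory described above.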

Finally, you gotta be able to create the boundaries. I think one of the biggest problems that everybody’s experiencing is how to force a nondeterministic thing into deterministic boundaries. Right? We don’t want an agent freestyling in our supply chain and coming up with, you know, oh, well, this is a new insight that now should drive this automation, when that automation wasn’t designed or built for that particular new insight.

So this boundary setting is absolutely necessary. And, again, our big thing is giving people, after they’ve gone through risk assessment, after they’ve gone through catalog, the ability to control things. And I’ll put it this way.

Would you rather secure things that you’ve already controlled, or would you rather be stuck fighting to secure things that are uncontrolled? This control plane is absolutely fundamental to being successful in both adoption and acceleration, and in securing your enterprise. We’ll go to the next one.

Yeah.

One of my favorite ones recently. So when we talk about third party and supply chain, a lot of times the conversation doesn’t go back to our giant legacy OEM exposures.

And I don’t know where everybody else dates to, but I think I came into my technology career in Windows 3.0.

And it seems like a long time ago, but it also seems like some of the behaviors and problems of three dot one are still manifesting in my agreements and my relationship with Microsoft, and this is a great example. Microsoft has exhibited a habit, and we have caught this a number of times in customer environments. They’ve exhibited a habit of introducing new features, new agents, and new services, all AI, with default on and default all.

Now there’s an argument that can be made, and we hear this kind of thing frequently: well, we have an MSA in place in terms of agreements with Microsoft, and they’re never gonna do us wrong. Well, I think that’s kinda Pollyannaish, because it doesn’t matter what your MSA with Microsoft says. The reality is that this leads to the potential for data exposure, data leakage, and privacy problems, simply because you’ve got something turned on that you may not know how it works. You may not know where that data is now being moved to. Maybe it’s going to learning engines that you would never approve it for.

And the reason this is important is because the other characteristic that I think people misunderstand about AI, particularly GenAI, is that, okay, well, maybe Microsoft is not doing a bad thing here, and they’re taking anonymized data to move it into their learning engines. You know what AI does really well on the GenAI side? Data reidentification. Given enough data over time, a GenAI engine is actually able to reidentify that data with extremely scary precision. And this means that we’re in a new paradigm as it relates to data privacy and data controls.

But, you know, first and foremost, we don’t want third-party providers to turn things on and turn on all of the default capabilities without our awareness.

We should be given the opportunity to determine whether or not this meets our business needs.

Just for our viewers here, what does reidentify mean?

So reidentification of data is the threat of inference.

So, because of the probabilistic nature of the math that is behind GenAI, if I give it enough data from a source, it can infer things that were never explicitly in that data. And the best way I like to explain this is using a non-AI example. The Strava Fitbit incident of several years ago was an inference attack. And the inference attack went like this. The Chinese PLA and its extended hacking arms got into Strava, which was a fitness database associated with wearable devices.

Using that information, the Chinese PLA was able to triangulate every confidential and top-secret physical military facility that the United States has. Why? Because they were able to take that anonymized data, and it was all anonymized, and use location and location aggregation to go, oh, wow. Look. There are two hundred Fitbits in this place in the desert.

And ergo, we now have a top-secret classified military facility. Now take that to the AI example. AI can take all of this data that’s been anonymized, and there are enough trend characteristics within that data that it can be reidentified.
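The Strava-style inference described above can be shown with a toy sketch. Everything here is invented for illustration, assuming anonymized location pings with no user identifiers at all; density alone does the "reidentifying":

```python
from collections import Counter

# Toy anonymized fitness pings: (lat, lon) rounded to a coarse grid,
# with every user identifier stripped out. Coordinates are hypothetical.
pings = [(34.1, -116.2)] * 200 + [(40.7, -74.0)] * 5

def suspicious_clusters(points, threshold=100):
    """Flag grid cells holding unusually many devices. In a remote desert
    cell, device density alone infers a facility, even though each
    individual ping is 'anonymous'."""
    counts = Counter(points)
    return [cell for cell, n in counts.items() if n >= threshold]

print(suspicious_clusters(pings))  # -> [(34.1, -116.2)]
```

A GenAI engine takes the same idea much further, correlating trend characteristics across many fields instead of one location column.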

Now we know. Got it. Yeah. That’s great. That’s great insight.

Yeah. It’s terrifying, because we have no data privacy principles, laws, or regulations currently in place that are oriented toward this. So, yeah. Let’s go to the next one.

You got it.

Yeah. Watch, keeping an eye on our time here.

Yeah. We should probably, so this is just getting to, you know, two more on the OWASP list here: sensitive information disclosure and improper output handling. These are the OWASP detective and preventive controls.

So let’s run through it. And I can make this one easy because we’ve already touched on it so many times.

Right? Which is, you know, the current control set that’s associated with data privacy and data control is extremely problematic with AI. You have to monitor what the AI features of agent services are doing in order to do effective data privacy and data control. You can no longer rely on standard DLP techniques, whether because of inference or because of, say, cross-coordination delegation between agents, passing data that you never authorized for them, and then it goes out through another channel to another provider.

And so I like to say, wait till the day happens when ServiceNow is delegating and transferring data to Salesforce in your organization, under your contracts, and you never see it. Right? It’ll be a very interesting day relative to data privacy. Keeping on time and on task.

We’ll hit the next one.

Yeah.

I like to use this one so much. The most feared application in the enterprise world is Grammarly. In this case, this is an actual case study for us.

The customer brought us in for a risk assessment, inventory assessment, thought they had six hundred enterprise licenses, locked and signed off on, and that was the limit of their exposure with Grammarly.

As we went through and conducted the assessment, we found that over twelve hundred people were using it internally. That included freemium as well as personal licenses on corporate assets.

The problem there is that if I’m using a personal or freemium license on a corporate asset, my EULA as an individual user with Grammarly says that any data Grammarly reviews and does grammar correction on is...

Training. It’s training on it.

They train on it. Right? So now, some of this is on Grammarly. They don’t have great telemetry around their licensing structure.

So, you know, you would think an enterprise license would mean I can block everything else out, but that was not the case, and still is not the case today. So, data privacy exposure through a third party, again, we can see all the connection points of the previous concerns as it relates to OWASP and how these technology controls are going to be different in the AI world.
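The Grammarly gap described above is, at its core, an inventory diff: seats you signed off on versus users actually observed. A minimal sketch, where the user names and counts are hypothetical stand-ins for what endpoint or network telemetry would surface:

```python
# Hypothetical inventory diff: licensed enterprise seats vs. users actually
# observed using the tool (e.g., from endpoint or network telemetry).
licensed_users = {f"user{i}" for i in range(600)}    # 600 signed-off seats
observed_users = {f"user{i}" for i in range(1200)}   # 1,200 actually using it

# Users outside the enterprise agreement: freemium and personal accounts
# on corporate assets, whose EULAs may permit training on their data.
shadow_users = observed_users - licensed_users
print(len(shadow_users))  # -> 600
```

The interesting part in practice is the `observed_users` set, which is exactly what continuous discovery tooling exists to populate.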

Let’s go to the next one. Yeah. Well, and I just wanna point out the top fifteen OWASP agentic AI controls. And, you know, let’s touch on those real quick. Feel free to run through these.

Yeah. Agent space is new turf for everybody, and we’re seeing a lot of weird, you know, promises: just give it an identity and put it in a directory, and you’re gonna be okay. Yeah.

Now, I mean, first of all, eighty percent of causes for breaches and exploits for the last thirty years have been on the identity plane, which doesn’t give me a lot of confidence that assigning an identity to an AI is gonna work any better. Yeah. Or magically make the identity security space better. Right. It’s gonna make it a lot worse initially.

But agent delegation is what I tend to focus on. So, for folks that don’t know, one of my largest corporate roles was global head of identity for JPMorgan Chase’s consumer businesses.

I think extensively in the identity headspace. So I think about things like agent-to-agent delegation or default over-provisioning, which is exactly what we’ve seen with Clawdbot. You know, I get it on the development side. Over-provisioning is always the entry point. Like, just give me access to everything.

And something does not work. Right?

Yeah. And, and this is super nerdy, but I’m somebody who had to deal with an organization where somebody gave a new robot god rights in SAP, and it was the only way they could get the robot to run. Right? Not what you wanna do if you wanna keep things secure. But there are other problems that are manifesting in agentic, and Singulr is building out the capabilities to be able to manage these. We are already agentic forward. We are already managing agentic use cases for customers.
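The over-provisioning anti-pattern (god rights just to make the robot run) can be sketched as a least-privilege check before a role is granted. The role names, permission sets, and `over_provisioned` helper are all hypothetical, not an SAP API:

```python
# Hypothetical least-privilege check before granting a service account a role.
ROLE_PERMS = {
    "god_rights": {"read", "write", "delete", "admin", "config"},  # "just make it run"
    "invoice_reader": {"read"},
}
REQUIRED = {"read"}  # what the robot actually needs to do its job

def over_provisioned(role: str) -> bool:
    """True if the role grants anything beyond what the task requires."""
    return not ROLE_PERMS[role] <= REQUIRED

print(over_provisioned("god_rights"))      # -> True
print(over_provisioned("invoice_reader"))  # -> False
```

The design choice here is that the check runs at grant time, so the path of least resistance ("give it everything") fails loudly instead of silently succeeding.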

But things like long-dated tokens, or the authN/authZ plane: bad things do not happen with AI at authentication. You’re just letting them in. It’s the vampire again. Bad, bad things happen at the authorization layer, where we have very, very little historical control and security in place.

So, for the time being, until authorization-layer security catches up, continuous monitoring of everything in your environment that is AI is gonna be absolutely critical to ensure that AI agents aren’t doing bad things in your environment. And then we’ll finish it off with what you need for the missing evergreen processes. This is a life cycle, but it’s not a life cycle like the old DevOps life cycle that kinda got to a point, and then nobody ever went back to the other end of the curve. This is a continuous loop, man.
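One concrete slice of the continuous monitoring mentioned above is flagging long-dated agent tokens. A sketch under assumptions: the token records, agent names, and 90-day policy threshold are invented, standing in for what an identity provider’s inventory would return:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical token inventory pulled from an identity provider.
now = datetime.now(timezone.utc)
tokens = [
    {"agent": "report-bot", "expires": now + timedelta(days=7)},
    {"agent": "sync-agent", "expires": now + timedelta(days=365)},  # long-dated
]

def long_dated(tokens, max_days=90):
    """Flag agent tokens whose remaining lifetime exceeds policy."""
    cutoff = now + timedelta(days=max_days)
    return [t["agent"] for t in tokens if t["expires"] > cutoff]

print(long_dated(tokens))  # -> ['sync-agent']
```

In a real loop this check would run on a schedule and feed a SOC queue, which is the "evergreen process" framing rather than a one-time audit.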

And we’re gonna be constantly evaluating everything in our environments, and have to have the tooling to do so, and that’s what Singulr represents. We don’t need to go through the rest of the slides for the solution in detail. What I’d like to make sure we’re doing is answering questions, if there are any, but, you know, certainly to kinda cover our partnership together.

Let’s do that. Alright. This is just some slides showing how your inventory portion works, and this is continuous. Not necessarily just one time; it can also be continuous. Right?

So Yes. Absolutely.

And this just shows all the different components of the technology stack from Singulr. But in summary, we need to consider the benefits and impacts, the human-suffering side of it, when building out these LLMs and agentic workflows.

What I heard loud and clear, Richard, is that we cannot manage this with classical security controls. We need a new way of doing this. So there are a couple of steps, and I’ll show this in another graphic here.

We need to discover where AI may be used in the organization, perform an AI-focused risk assessment, implement AI policies and safeguards, and have continuous governance, to your point. Right? This is that infinity loop. So the partnership between HALOCK and Singulr is unique in that we’re providing this comprehensive AI risk assessment, where we’re able to combine the experts from HALOCK, the methods using duty-of-care risk, and Singulr’s technology platform to provide this.

And it can be either a one-off or a continuous AI risk assessment. Of course, everyone starts out with a baseline AI risk assessment. So that’s what we’re offering up as a framework, and the way to get going on this is discovery. Where are all of our assets?

Right? And that’s where we can use this AI risk assessment; the baseline is gonna have that as a part of it. We’re gonna evaluate the risk, understand the business use cases, and evaluate the cost-benefit analysis for the risk and benefit. What are reasonable safeguards that could be implemented?

Recommend policies, and then, of course, the final state here is implementing policies, enforcing policies, implementing safeguards, and then continuous risk evaluation. So while this is the entire risk framework, that baseline risk assessment is these first two components, which is what we’re actually suggesting as a first step for most organizations. Of course, the governance is gonna come right after that. So with that, if there are questions, we’ll open it up for just a few minutes.

I know we had a little bit of a late start, so we’ll give just a couple of minutes if there are any questions that have come in.

We’ll keep an eye on the ah, I got one.

Do you wanna take that one, Richard?

Yeah. So one of the slides that we went through was one that shows our data collection and UX manifestation of all of the AI categories within our tooling.

This one?

Let’s see here. Is it the daisy diagram? Yes. So here you can see, we’re categorizing by, I would call it, a pillar.

You know, how many AI services and agents have been discovered in the organization, in the center, and how do they break out across each of the different components? So, internally built.

One of my favorite use cases recently that we’re working on was an organization that’s very advanced in building their own models and agents, and what they would like to do is birthright identity assignment to that agent.

So, yes, we can see the agents. We can discover what’s being built internally. We do catalog an inventory. We do risk assess it.

But we’re also seeing folks that are taking it one step further, and they say, it has an identity, and now I would like you to become the source of record and source of truth, birthright to grave, for the life cycle of that agent. Yeah. And so, yeah, we’re definitely in that space, and then we also can see embedded AI as it relates to OEM manufacturers and solutions, as well as services, where it may just be, like, Canva, right, that is embedded, but it is also a specific AI service around graphical creation.

So we can see the different aspects and then align those to the right categories for your teams to be able to manage them based on either domain knowledge or expertise. Or from a security standpoint, obviously, you want that information being channeled to a SOC or analysts, and we give you that flexibility across all types of AI.

Perfect.

Any other questions that have come in?

Just a little bit about Singulr. And, again, this is recorded, so folks can see this later. Any comments you wanna say about Singulr?

I love the company. I want us to succeed, and I think that we’re doing great work. I think that our customer base and our experiences so far prove that. Our position is that current-state governance and control standards and frameworks, NIST AI RMF as an example, only address about thirty percent of the standards toward AI-to-technology interfacing. Sixty percent of it is still human-to-technology, which we believe is a mistake.

So we have very strong opinions on how to be successful in this space that go beyond just our technology platform, but also inform our technology platform. We believe there are huge gaps in today’s current standards, and we believe that we need to implement an immediate compensating control until those standards catch up. We don’t have committees that are sitting around, you know, harumphing with each other for nine months on a particular new control or a new standard.

Yep.

We’re in the field dealing with it every day.

Another question here for Singulr is, does it show the difference between the various LLM tools, like Haiku, Opus, and Claude?

Absolutely. In fine-grained detail. Not just the difference between the tools, but also the differences between the versioning within those tools.

Perfect. And then also time and the user account. Right? Who’s using them and for how long?

Absolutely. This is actually where we bleed out into the broader operational benefits. We have CFOs who love to see the usage metrics that we have, because they’re assigned to individuals. We can also roll those up to departments corporate-wide. So all of those different pieces of contextualized information have more value than just security value, and they are being leveraged that way by our customer base.

And this is just a little bit about HALOCK and the fact that we are focused on Duty of Care Risk, AI risk assessments, AI safeguards, etc. We’ve been in the security space since nineteen ninety-six, a long time to be in information security. So with that, I think we can close here on time. Even though we started out with a little bit of a delay, we wrapped up on schedule. We certainly welcome any questions coming to either Richard or me, and those are the email addresses. Richard, I can’t thank you enough for the partnership. This is a great one. The connection here, and the platform you have combined with our resources and risk methods, really make one plus one equal five in this case.

Excellent. Well, thanks so much. I appreciate this. It was a great conversation.

And that concludes our webinar. Thank you, folks. The recording will be sent out within the next couple of days.
