Transforming Vulnerability Management: A Practical Guide to Continuous Threat Exposure Management (CTEM)
In this webinar, we will break down the CTEM methodology outlined in the article, Threat Exposure Management — What It Is and What Problems Does It Solve.
View the Recording
Session
A Practical Guide to Continuous Threat Exposure Management (CTEM)
Speaker: Erik Leach, CISSP, SCF | CISO, HALOCK Security Labs
More Session Details:
Traditional vulnerability management floods organizations with data but often fails to answer the questions that matter most:
Which threats pose real business risk? What should we fix first? Are we actually reducing exposure over time?
Continuous Threat Exposure Management (CTEM) provides a more accurate, business-aligned approach by shifting from periodic scanning to a continuous, risk-driven evaluation of your true exposure. Rather than focusing on the volume of vulnerabilities, CTEM emphasizes understanding your assets, the likelihood of exploitation, and the operational impact of an attack.
Attendees will learn how CTEM unifies asset discovery, risk-based vulnerability management, external attack surface management, and automated penetration testing into a cohesive, measurable program. Participants will walk away with a clearer view of how to: Identify which threats genuinely matter to the business. Prioritize remediation based on real-world likelihood and impact. Continuously track and communicate risk reduction. Align cybersecurity efforts with operational and executive priorities. Join us to learn how shifting from vulnerability counting to exposure management can transform your security program into one that drives measurable, meaningful risk reduction.
TRANSCRIPT OF CTEM / EASM WEBINAR
If you have any questions, please feel free to enter them in the chat, and we will answer them as they come up and as appropriate. Thank you. Enjoy the session.
Thanks, Rosanna, and welcome, everyone. And I appreciate you all spending your, probably a lot of you, your lunch hour with me.
A little bit about the series, before we get started on the content.
This is a new series we started for 2026. Our intention is to educate on different topics throughout the year related to cybersecurity. They could be issues we’re facing, or they could be new technologies, or both. Right? But, really, we wanna stay away from talking about, like, specific products, really just solution approaches. Let’s focus on that, how they work.
We can always talk products offline, but, really, this is just to, again, educate. We’re gonna keep these presentations to thirty minutes or less. That’s our goal.
And, we’re gonna target one of these every six weeks. So, hopefully, we have a lot of good content to share with you all.
A little bit about HALOCK. We’re a security consulting company that’s been around since 1996. We do a lot of assessment work. If any of you or any of our customers know, you know we do a lot of risk assessments using our Duty of Care methodology.
We also publish with our partner CIS, where you can go and download CIS RAM, which is our risk assessment methodology that we kinda gave away. We do a lot of compliance work, PCI, HIPAA, CMMC, those types of things. Lots of pen testing. We’ve done pen testing for a long time. All types of it, everything from internal, external, web app, red teaming, you name it. We do cloud assessments for Google, Azure, and AWS, and we do some attack path modeling as well. So, basically, we’ve come up with an approach using CIS and MITRE to sort out threats and categories of threats, where we can basically inquire and discover what kind of controls you have in place, and then basically model out the very common attack paths that have been defined by MITRE, such as ransomware, insider threat, etcetera.
And then finally, we do forensics and incident response. We have a hotline. People call it all the time, usually on a Friday afternoon, and then we can do full forensics on all sorts of things.
A little bit about me. Rosanna gave you a preview.
I am the CISO for HALOCK and Reasonable Risk. I’m also the practice lead for our cybersecurity solutions engineering and forensics group. And, really, my main function at this company is to solve problems. And due to the nature of the work and the work that my team does, we get exposed to multiple incidents, and we see lots of examples where a weakness in one or two security controls leads to an incident and an exposure breach. Right?
So we really focused on trying to help people identify where they have those gaps and then harden them from these attacks.
Alright. So for today, we’re going to be talking about Continuous Threat Exposure Management (CTEM) and, where vulnerability scanning falls into that equation, why it’s falling a little bit short these days, what CTEM is, because it’s a little confusing what it is. We’re gonna talk about several different approaches, how vendors are approaching CTEM with their solutions. We’ll compare them all, and kind of talk through the CTEM process of how they’re typically run within an organization.
And then how do we transform and get to that sort of CTEM continuous vulnerability management place where we all wanna get to? We’ll talk about next steps, and then we’ll go through some q and a. There will also be a couple of polls at the end, so please stick around for those.
Alright. So let’s start with vulnerability scanning.
We all know what it is. We’ve all done it, I’m sure. It’s really the process of looking for known vulnerabilities and scoring them using a system called Common Vulnerability Scoring System, CVSS for short. They’re scored critical, high, medium, low, and informational.
Now this has been around for twenty years, or so, I believe. It’s a little over twenty years, and it’s been helpful. Right? It really helps us kind of prioritize what needs to be fixed first.
However, when you run them, they often result in hundreds of findings, and the larger the environment, the worse it is.
And the problem with that is that we all know we have limited time and resources available to remediate these vulnerabilities. Right? You have to do these things off hours, on weekends, because patching takes systems down. It requires reboots. It requires restarts of services.
The other problem that has developed over the last few years is that we’re not looking at the vulnerabilities in the context of the risk to the business and the overall threat landscape. Right? These are kind of like, well, if this vulnerability is exploited, this could be bad. But, oh, based on your asset, potentially, it’s a lower threat because that asset’s just a static web page, or maybe it should be a higher threat because that asset interacts with sensitive data. So we’re lacking that context with vulnerability scanning today.
The result of that is that we tend to stretch out the period of when we’re doing the scans. Right? So we had the best intentions. Hey.
We start weekly. Oh, we can’t get all those patches. Let’s go to the monthly because I don’t need you to tell me what I already know. Oh, let’s go to quarterly because we’re not fitting in, you know, the patches that we’ve identified, and it just kinda exacerbates the problem.
And then the last thing, and it’s kind of an important note, is that, typically, these solutions for vulnerability scanning require whitelisting. To get around your security controls, you have to whitelist the scanner at your firewall. You have to whitelist the scanner at a web application firewall (WAF), and at the endpoint, if you wanna truly find all the vulnerabilities that exist in your environment.
So the problem ends up being, what should we address first? What’s the most important thing for my IT team to patch? What can wait? Right? Because if you have, like, fifty criticals and a hundred highs, which of those criticals and which of those highs should actually be patched first?
So the security industry has responded to this problem, and they’ve created multiple approaches for addressing it. And they all fall under an umbrella that’s called Continuous Threat Exposure Management. You’ll hear me refer to it as CTEM. That’s what the industry refers to it.
So what it’s aiming to do is basically provide you with some additional risk and threat context, not just criticalities based on CVSS, but other factors such as, hey. Let’s give some context on what these assets are and what they mean to the company. Is there sensitive data? Are they an important function, etcetera?
And let’s use that to color the findings and the ratings of the threat categories.
Also, introduce basic threat intelligence of the vulnerability. Hey. Have other people experienced breaches based on this vulnerability in the wild?
How likely is that vulnerability to be exploited within the next thirty days? They call that an Exploit Prediction Scoring System, EPSS for short. Talk about that a little bit later. And again, all these things are giving color to the findings now to give you more of an idea of whether this is a higher high or a lower high. What should be addressed first?
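To make that idea concrete, here is a minimal sketch, not from any vendor’s product, of how a finding’s priority could combine CVSS severity with an EPSS exploit probability. The CVE names, scores, and the multiplication rule are all illustrative assumptions, just to show why a high with an active exploit can outrank an idle critical:

```python
# Illustrative findings: field names and scores are made up for this sketch.
findings = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02},  # critical, rarely exploited
    {"cve": "CVE-B", "cvss": 7.5, "epss": 0.89},  # high, actively exploited
    {"cve": "CVE-C", "cvss": 9.1, "epss": 0.45},
]

def priority(f):
    # Weight the CVSS score (0-10) by the EPSS probability of exploitation
    # in the next 30 days (0-1). This weighting is an assumption, not a
    # standard formula.
    return f["cvss"] * f["epss"]

for f in sorted(findings, key=priority, reverse=True):
    print(f["cve"], round(priority(f), 2))
```

With this rule, the actively exploited high (CVE-B) comes out ahead of the critical that almost nobody exploits (CVE-A), which is exactly the kind of reordering the threat context is meant to give you.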
So the next few slides, we’re gonna dive into the approaches. And I’ve done a lot of research and reading and talking to people and getting demonstrations of the different approaches, and they generally fall into three sorts of categories, and we’ll go into them. That’s risk-based vulnerability scanning, external attack surface management, and automated penetration testing.
So the first slide we’re gonna start with is Risk-Based Vulnerability Scanning, RBVS.
And the objective for all these approaches is the same.
It’s to provide you with more impactful vulnerability scores to aid you in prioritizing your patching.
So the RBVS approach allows the application of risk scores to assets. So a finding that is for an asset that has a risk score of three, which is on a one to five scale, right in the middle, versus maybe an asset where you provided a risk score of, like, a five, you know, more risk, more threat, more impact to your business, those could be rated differently now. Right? The one that is a five with that same vulnerability is gonna be higher versus one that’s rated as a three, for example.
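A minimal sketch of that RBVS idea, with a scaling rule that is an illustrative assumption rather than a standard formula: the same vulnerability rates higher on a risk-5 asset than on a risk-3 asset.

```python
def adjusted_severity(cvss: float, asset_risk: int) -> float:
    """Scale a CVSS score by the asset's 1-5 business risk rating.

    The linear scaling here is an assumption for illustration only.
    """
    return round(cvss * (asset_risk / 5), 1)

same_vuln_cvss = 8.0
print(adjusted_severity(same_vuln_cvss, 5))  # risk-5 asset -> 8.0
print(adjusted_severity(same_vuln_cvss, 3))  # risk-3 asset -> 4.8
```

The point is only that asset context, however a given tool models it, splits one flat CVSS number into different priorities.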
These, just like a vulnerability scan, are scheduled to run frequently.
The frequency is definitely up to the business and the risk that you have in your environment, again, of your assets.
It still requires whitelisting for the most part to get them to run successfully. So you are circumventing your security controls, but your goal here is to find the vulnerabilities.
But, just like vulnerability scanning, they do not validate whether your vulnerability is exploitable.
So think about it from an attacker’s perspective. They’re looking for vulnerabilities, and then they’re going after vulnerabilities to see if they can exploit them. This does not try that last step of exploitation.
So to provide an example of a client outcome using the RBVS approach: as I was saying, there’s a big vulnerability backlog. Patching’s an issue. They never get around to it. Criticals and highs stay around for a long time. So they shifted to a model like this, with this type of solution, which resulted in fewer findings that really needed to be addressed for remediation. And that really improved their efficiency and the confidence in the vulnerability management program they had as a whole.
Now that was RBVS. Now we’re gonna switch to External Attack Surface Management (EASM). This is a different approach, but it’s additive. Okay?
The objective is the same. Right? We have the same objectives. We’re trying to make patching and remediation better.
They do allow for the application of a risk score, like the RBVS approach we just talked about.
These are designed to continuously monitor.
The difference that’s additive here is that it’s exploiting the vulnerabilities automatically and coming back with the evidence.
So then, very much like a penetration test, if you’ve ever had one of those before, you see in those reports that it’s got the evidence. How did we exploit this? Here are the steps to exploit it. Here’s the evidence. That’s what this type of approach provides.
So now you’re down to a smaller subset of vulnerabilities to address because, from an attacker’s point of view, these are the ones that I can exploit.
And, oh, I have a risk rating with these assets too. So, the higher risk assets, I should prioritize over the lower risk ones. Okay?
Testing occurs with your security controls in place. This is the other big difference.
So with your firewall, with your web application firewall, with your intrusion detection and prevention, right, it’s going to try to work around those security controls to exploit. And it’s a really good indicator of whether your controls are working, because if you come back with a small number of findings, that means either your security controls are working very well in combination, or you don’t have many vulnerable systems and assets out there. Right? So that would be a great case.
The downside to this approach is that it does increase your traffic while testing. Vulnerability scanning does that. RBVS does that. But this is just another level, because it’s actually trying lots of hacker attempts to make those exploits happen.
So you’re going to see an increase in traffic during the scanning windows.
Why is that a concern? Well, if you do have assets that are on the edge of their performance capabilities and, like, an increase of a hundred requests might knock over your web application, for example, well, then you’re gonna wanna scan during off hours, right, off peak, maybe in the early mornings, maybe on the weekends, whatever is appropriate for your business.
And right now, the other drawback is that these solutions are really focused on external-facing assets because, you know, that’s how an attacker sees it. Right? So, even to get into your internal environment, they still have to compromise a weakness or a deficiency somewhere first. Right?
So a client outcome for this: there was a mid-market organization that had multiple cloud instances. Maybe they were in AWS and Azure, and they have a dev team that is continually spinning up and down environments and introducing risk and vulnerabilities unintentionally. They’re just doing their job, but it’s happening. Right?
And unless that architecture in that network is configured to isolate, like, your dev and your prod and your QA environments, for example, there could be a chance that an issue in one location of those cloud environments is gonna lead to an issue in other parts of your cloud. So they implemented an EASM solution, and they were able to identify multiple high-risk exposures and identify assets that they didn’t even know about, right, because of that development team sort of spinning things up and down. So now they were able to get a handle on it. They were able to identify the vulnerabilities that could be exploited, and they were able to target them.
And then the first remediation cycle knocked out, like, ninety percent of those vulnerable findings.
So big win for them.
Alright.
A new approach, also under the CTEM umbrella, is automated penetration testing.
And you noticed, basically, as I’m talking about things, we’re moving kinda up the stack a little bit from foundational to optimal.
So the objectives are the same.
It does validate exploitable vulnerabilities.
The thing that’s additive for these is they’re a bit more sophisticated in their ability to execute the exploits and, basically, demonstrate those exploit chains to the user of the solution. When I say exploit chain, think of it as an attack path. Right? You’ve got the start where you’ve got, like, this hacker that’s looking for vulnerabilities, and it does an exploit. There may be multiple exploits that the attacker took advantage of to get to their objective, which may have been ransomware, stealing data, or using resources, etcetera.
What these solutions are really good at is displaying that attack chain all the way through. And let’s just say, for example, you know, I had five vulnerabilities that were identified to get from the beginning of an attack to the objective.
And let’s just say three of those vulnerabilities, like one, three, and five, couldn’t be exploited unless you exploited two and four first. Right? So, where is your priority gonna go? Your priority is gonna go to addressing two and four because without that, one, three, and five cannot be exploited. So that’s what I’m talking about with an exploit chain. Again, these tools are really good at showing that and really pinpointing to you, like, this is the vulnerability you need to fix, and that will take care of the rest of these other ones.
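That five-vulnerability chain can be sketched as a prerequisite graph. The modeling below is an illustrative assumption, not how any particular automated pentesting tool represents it, but it shows why the prerequisite vulnerabilities (two and four) are the ones worth patching first:

```python
# Map each vulnerability to the vulnerabilities that must be exploited
# before it becomes reachable (the speaker's example chain).
prereqs = {
    1: {2},       # vuln 1 is only exploitable after 2
    2: set(),     # 2 has no prerequisite: a direct entry point
    3: {2, 4},    # 3 needs both 2 and 4 first
    4: set(),     # 4 is also directly exploitable
    5: {4},       # 5 is only exploitable after 4
}

# Remediation priority goes to the vulns with no prerequisites:
# patching them cuts off everything downstream in the chain.
entry_points = sorted(v for v, deps in prereqs.items() if not deps)
print(entry_points)  # -> [2, 4]
```

Fix those two and, by construction, one, three, and five can no longer be reached, which is the whole exploit-chain argument.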
Drawbacks to these solutions right now: they don’t typically allow for the application of a risk score, so they’re still kinda asset-unaware. They’re still finding vulnerabilities and are really good at showing the exploit chains, but is that asset important? These tools don’t know. There may be a need to whitelist these things, at least in the beginning, for the first few months when you use this kind of approach, because you do wanna find all the vulnerabilities, especially internally.
And then once you’ve got a good idea of what’s going on, you may want to run these with your security controls in place and see what they can do.
They are typically more focused on the internal assets or network. Some of them will go out of the internal environment to attack an external asset, like something in the cloud in Azure, for example. They will try to, basically, crack passwords to see if they can get credentials that give access to other assets.
But they do typically start internally. It depends on the solution.
So a client outcome for this is kind of a larger company that implemented this type of approach, where they had this hybrid of internal and cloud assets, and they had assets all over the place. Right? And they were able to basically validate some exploit chains and then patch those specific systems where the paths led them. And then you’re going after, like, root causes now, which is that initial exploit versus the five that I found in my example.
Okay.
Manual penetration testing.
That is always important, and none of these solutions necessarily replace manual pentesting. Although automated pentesting might replace some types of pentesting someday, it’s not there yet. But, yeah, with AI, everything is getting there. Same objective, obviously, validating exploitable vulnerabilities, and it’s really useful when you need some granular testing with some human ingenuity, where a person can make some connections that maybe a scripted or an AI solution cannot, just based on their experience and what they’ve done before. Right?
They typically go low and slow because they’re trying not to knock over anything in your environment. And if it’s a red team exercise where they’re actually trying to simulate an adversary, they’re certainly gonna go low and slow because they do not wanna get detected.
You can test where the assets are located. So if they’re internal, external, cloud, you know, wireless, mobile, you can get all those tested as part of a manual pen test. So no restrictions there.
In my experience, they don’t typically consider the risk of an asset as part of the findings. So, very much like the automated pen testing, they don’t really have a risk associated with an asset where you can kinda color a finding up or down. Right?
And then, typically, you do have to do some whitelisting depending on the type of test. If it’s a red team adversarial test, no. But if it’s something that we call, like, an assumed breach where you’re saying, oh, I’m assuming someone got into my internal network, and I’m gonna start from that point with no credentials.
You may or may not need to whitelist for that. Certainly, if it’s an internal or an external test, a pen test, you most likely will have to whitelist some things. So it really depends on the type of test.
Also, one thing with manual pen testing: because it is humans, right, there is effort involved, and effort equals cost. Right? So a lot of times for the pentest, you have to limit the scope. Hey.
I want you to only look at these assets, these web applications, these IP address ranges. Right? Whereas with the other approaches, there’s no limit necessarily on what you can look at. Right?
So these are intended, again, to be much more granular and specific, and are definitely warranted. But because of that, because of the scope, because of potentially the cost of doing it, they’re typically performed once a year.
Remediation verifications are typically performed once a year. Right? Whereas with the other approaches, you can just do those continuously.
Alright. That was a lot of information. Let’s try to look at an example of the comparison chart here. Basically, the things that I wanna highlight, you know, we see all the different solutions we covered here, some rows that are interesting.
Right? So, as we talked about the risk-based threat severity ratings, you can see that there’s a difference between the approaches. Right? No for vulnerability scanning.
Typically, no for manual penetration testing. And for automated penetration testing, it’s a little mixed, yes and no, and we’ll talk about that a bit.
For the validated vulnerabilities, this should be pretty evident. Right? In the vulnerability scanning and RBVS, there have been no exploitations tried. But for the other approaches, yes, there are.
And then the scope of testing. When I say scope, is it external, internal, web app? Right? Vulnerability scanning and RBVS can be run against both external and internal assets.
Some of these solutions actually do have some, what I would call, legacy web application testing, where you provide credentials, and it’s gonna try to run a script to see if you’re following good, you know, security code hygiene, coding practices, etcetera. But that’s pretty much the extent of it. And you’ll see in the other ones, it’s kinda mixed as well. Right? So EASM is really focused on the external. It does web app testing, but it’s really the OWASP sort of top ten methods that it’s trying to do to hack your system.
For automated pen testing, external is partial, as I mentioned before, because some of them will follow from internal out. Some of them are focused more on internal testing.
And then, again, the web app testing, some of them will test external web apps, but some of them won’t.
And then, with manual penetration testing, you could test the asset where it is. So there are no real restrictions on what and where you can test.
Now, just a quick note, I’m gonna draw on the screen here.
Basically, where I see this all going in the next six months, one year, two years, is that you’re gonna have vulnerability scanning, and they’ve already started doing this, add on this capability here. Okay?
And some of the big ones have already enabled RBVS. Usually, it’s an add-on that costs more, but we’re already seeing that right now.
EASM, well, they’re going to start incorporating vulnerability scanning. Right? So, potentially, you can get rid of your vulnerability scanner. You can get rid of your RBVS solution because it’s gonna be incorporated within the EASM.
Where they’re also going is they’re gonna go down here, and they’re gonna add internal testing. So now they can handle all of your vulnerability scanning with the risk, basically, and finally, they’ll also start adding the attack exploit chains that the automated pen testing does. So they’ll continue to evolve. And, of course, automated pen testing is going to add on risk-based vulnerability scanning.
Right? So these all are gonna start, I predict, within the next year. You’re gonna start seeing acquisitions, mergers, add-on capabilities and functions, and these are gonna all start coming in one solution.
Alright. Let me clear my screen out here.
Okay. It’s not giving me the option to erase my screen. Just give me one moment.
Alright. We cleared that out. Let’s go to the next screen.
Alright. This is basically the process of running any of these Continuous Threat Exposure Management programs. So you all start off in the upper right at Discover. They’re all gonna start off with some sort of discovery. Some will use top-level domains. Some will use IP address ranges. The goal is the same: to discover all your assets.
There’s going to be an analysis stage. Right? And I have it split up into two based on the type of solution. One is they’re all gonna look for vulnerabilities, but then some of them are going to apply risk scoring. And so that stage will happen, and then we’ll go to a validation stage, and that’s only really applicable for the three on the right.
Basically, that’s EASM, automated pen testing, and manual pen testing. And then, finally, everything out of here comes with a report.
So what do we need to do to create a vulnerability management program? We gotta start with a baseline assessment, and then the ongoing assessments are where the magic happens.
You’re gonna get stuff like average threat score over time. You’re gonna get validated exploit statuses, mean time to remediation, new asset discoveries, and new exploits for existing assets.
And then you can use that information to set thresholds for what is acceptable and what requires immediate action, and then measure against those thresholds on a weekly, monthly, quarterly basis, whatever is best for your organization.
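As a rough sketch of that kind of threshold check, here is an illustrative mean-time-to-remediation calculation. The field names, dates, and the fourteen-day threshold are assumptions for the example, not a prescribed metric definition:

```python
from datetime import date

# Illustrative remediation records: when each finding was discovered
# and when it was fixed.
remediated = [
    {"found": date(2026, 1, 2), "fixed": date(2026, 1, 9)},   # 7 days
    {"found": date(2026, 1, 5), "fixed": date(2026, 1, 26)},  # 21 days
]

# Mean time to remediation, in days, across the records.
mttr_days = sum((r["fixed"] - r["found"]).days for r in remediated) / len(remediated)

THRESHOLD_DAYS = 14  # whatever this business decides is acceptable

print(f"MTTR: {mttr_days:.1f} days")
if mttr_days > THRESHOLD_DAYS:
    print("Above threshold: this requires immediate action")
```

Run weekly, monthly, or quarterly, the same comparison against your chosen thresholds is what turns the raw metrics into the "acceptable versus immediate action" call the program needs.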
Okay. First poll. What I wanna know is where you are at in the CTEM spectrum, from that left side foundational to all the way to the advanced, or none of the above?
Okay. Let’s wrap up the poll and see what we got.
Okay. So we’ve got wow. The majority of you are at the external attack surface management. That’s wonderful.
And then we see we got some automated pen testing here as well. So, yeah, a good mix of folks here.
Looks like most people are either intermediate or advanced, so that’s wonderful to see.
Alright.
So what next?
Depending on where you are in your journey for CTEM, right, is it enough? Are you doing what you need to do? Are you getting the value you need to get? Is it reducing your remediation activities?
Right? Based on where you are, figure out where you need to get to. Right? And if you need help with that, please contact me or HALOCK.
We can have the discussion, discuss what tools you have, what you’re struggling with. You know, we have lots of ideas for how to enhance those types of capabilities. I will say that this deck, this presentation, is gonna be sent out. On the lower right-hand side, there’s a direct link to book time with me.
The first ten people who book time with me can get an EASM engagement, which has a minimum cost of, like, $9,500, for free. Contact me, and we can get you set up with one.
And then q and a. And I know we’re a little over. I didn’t hit my first goal of keeping it under thirty minutes, but I do wanna allow for some q and a. If you wanna just throw them into the chat, we can answer the questions. If you want to send them later on, after the meeting, or talk to me about them personally, we can do that as well.
Okay. So there have been no questions. Although, while I was giving this presentation to my son to see if he could understand what I was presenting, he did have one question that was pretty interesting, which was, hey, where is HALOCK in this whole thing?
What are you guys doing? Right? And so what we’ve done is we’re doing EASM because that meets our objectives. We think that’s right now, that’s the place to be for us.
That could change in six months, but we’re monitoring all of our external assets with an EASM solution.
Okay.
One more poll, and then we’ll wrap this up. For the next webinar, what do you think we should discuss? I suggested four items, but if there’s something else that you’re interested in or it’s something that you’ve been thinking about, also put that in the chat. We’ll record that and consider that for future webinars.
Okay. Get those selections in there, and let’s see what we got, Rosanna.
Okay.
How to do an AI risk assessment. Well, that’s a very hot topic, and I believe we’re presenting on that at the beginning of February. So look out for that invite, and we’ll most likely do that topic as well for the next one.
Alright. So thank you, everyone, for sticking with me.
Just to wrap up, we are gonna be at FutureCon on January 29th in Chicago. You can attend, you know, in person or virtually. We’ll have a table there. I will actually be there. So if you wanna come by and say hi, great. Love to meet you.
It is worth ten CPE credits if you go to the conference or attend virtually. So, yay, (ISC)2 members, right, for your certifications, and you can RSVP at HALOCK dot com using that link below.
So with that, I wrap up the presentation. Thank you very much for attending.
Rosanna, I didn’t know if you wanted to say any words at the end of this, but otherwise, thank you, everyone, for attending.
We appreciate it. We will send out a follow-up email with a link for the presentation and the ability to schedule time with Erik. As he said, the first ten folks who schedule a meeting with him can get a CTEM scan. And then you’ll also have the link for registering for FutureCon in Chicago, virtual or in person. With that, thank you, everyone, for attending, and have a great rest of your day.
Turn it over to you.
MORE ARTICLES
Continuous Exposure Awareness, Practically Speaking
Preemptive Cyber Defense – A natural evolution
Threat Exposure Management – What it is and what problems does it solve?
Review Your Security and Risk Posture with EASM and CTEM
Be one of the first to claim your free scan.
