FUTURECON CHICAGO 2026

Securing the Future: CISO Insights and Industry Leaders Discussing Current Cyber Threats and Strategic Defense Practices

Panelist: Terry Kurzynski, CISSP, CISA, PCI QSA, ISO 27001 AUDITOR

 

TRANSCRIPT

Alright. Thank you, everybody.

Hope we all have a pizza and some drinks to hold us through this.

So really honored to be here.

My name is K. Solomon Okoli, AVP, Global Technology Risk Governance and Controls. I know there was a time somebody actually asked how many GRC people are here. So, there are still a lot of us here.

So, we do keep the lights on, and we help security, controls, and the organization really stay defensive and all of that. So, I am really privileged and honored to be here and to moderate this panel, right? Today we are talking about securing the future, and you will be hearing from four experts and leaders in cybersecurity.

And these are people who are at the forefront of cybersecurity and risk. They are helping organizations form strategic defense practices and also ensuring that they understand the landscape.

So, how we are going to approach this today is actually to just walk through and understand what it is that they are seeing in the environment. What is the attack landscape?

And then from there, we’ll go through what the solutions are. What is working? What is not working? What do we need to do much better?

And then from there, we go on to what AI is doing to us. Like, is AI an opportunity or is it a risk? Right? And we also look at other emerging technologies and how that is either influencing or impacting how we defend our organizations against cyber threats.

And we also won’t forget about regulations and compliance and all of that that is coming. So, I’ll have the panelists introduce themselves. We’ll just start from my side. Brad, if you can.

Hi, I’m Brad Chaffenbiel. I’m Vice President and Chief Information Security Officer at Paychex. If you’re not familiar with Paychex, it’s the second-largest payroll and human capital management software and services company.

Our claim to fame is that one out of eleven employees gets their paycheck from Paychex.

I’m really hoping that we grow our client base here, so I say one in ten, because one in eleven is kind of awkward.

So we’re all getting paychecks today.

Hi everyone. My name is Jennifer Rayford. They call me the Olivia Pope of Cyber and AI. I am the CISO and Chief Digital Trust and Risk Officer for $ENX, called Enigma Protocol, and also the CEO of Globesec Advisory. Before that, I was the deputy CISO and head BISO for Unisys, a point of contact for the White House, and an appointee to a couple of the subcommittees for national policy recommendations to the White House.

Also, I’m a global ambassador for responsible AI.

 

I’m Terry Kurzynski, founder of HALOCK Security Labs.

This is the twenty-ninth year; we’re coming up on thirty. And I operate as a security adviser for several hundred of our clients. Was there another piece we’re supposed to give?

What keeps us up at night or something like that.

Keeps you up at night.

Well, the speed at which AI is being adopted by threat actors is what’s keeping me up at night. How about that?

All right.

Thank you. So I’m Oscar Geraldo, AVP of Data Security at Waterton. We are a real estate investment and property management firm operating throughout the US.

So primarily, I’m responsible for protecting our customers, investors, and residents’ data, which is a lot there.

In terms of what keeps me up at night, Terry, you kind of hit on one of the things I was thinking of. But I would say how easily organizations can overestimate their level of protection. That’s one of the big things that stands out for me.

So thank you.

So, maybe start with you, Oscar, for our first question, right? We understand that cyber threats are evolving faster than ever. So, how is the threat landscape shifting compared to last year?

That’s a big question. And this is not just compared to last year. I really think threats now evolve in weeks, compared to months, as it used to be.

And AI has obviously accelerated that. We’re seeing that across the landscape.

But on the other side, AI is also helping us improve our defenses. So I think it’s looking at that, but also at how we adopt AI. This year, we’ve heard talk about companies that are piloting it, looking at rolling it out, testing it, but also about having the policy in place, what you can do to manage it. And it’s continuing to look at what you’re going to do, not if, but when, and we talk about this a lot, when that data exposure happens because of AI, whether you’re using ChatGPT or Copilot. So thinking about putting guardrails there. Those are the things that we as an organization are now thinking about and really trying to get ahead of, which is not easy, right?

Thank you.

If I can ask a question.

So Terry, building on how the landscape is changing, do you think attackers are getting better or defenders are getting worse?

So, I’m going to answer both of those and dovetail on what Oscar said too.

Threat actors did not hold back from leveraging AI and its tools. What we’re seeing in real time is threat actors operating at machine speed, automating a lot of their processes to discover vulnerabilities and the attack surface, work the kill chain, and compress the kill chain in a way that our traditional security responses, periodic vulnerability reviews and mean-time-to-remediate metrics, just will not keep up with.

We will have to, and this we are behind on, adopt better exposure management and threat exposure management tools, and automate the orchestration of responses to threats in immediate, real time. We no longer have the luxury of reviewing it in a committee to decide what we’re going to do about it. We’re going to have to respond in real time. We’re not there yet. The threat actors are ahead of us.

Okay, thank you.

Jennifer, maybe I’ll come to you. So, as we understand the landscape and all of the risks involved in managing threats, right? So, what are some of the threats that are being underestimated?

So, I think what’s been underestimated is probably the scale, magnitude, and velocity with which the attackers have leaned into AI, compared to where we are from an organizational standpoint. So that’s one of the underestimations. Phishing attacks, for instance: I’d be looking out for those, because they’re going to get better and better and better.

So that’s something I’d be looking out for.

Okay. Thank you. And so Brad, in your organization, though, what are some of the most urgent threats your teams are seeing right now?

And what, how is that different from last year?

Yeah. I think what we’ve seen a lot more of this year is deepfakes used for social engineering. We’ve seen a huge increase in that. I’ll give you an example. We have a general manager of our European subsidiary. He got a text message purportedly from our CEO with a link to Zoom, saying, I need you to jump on this right now. He jumped on the Zoom call, and there was the CEO, his likeness and image, his voice exactly, saying, we’re going to acquire a company in Europe, and we need you to wire the funds to complete the transaction.

Fortunately, through good security awareness training, our GM realized that he would have been involved in the acquisition of any entity in Europe well before any money needed to be wired. So he didn’t do it. But it was a spitting image that sounded exactly like the CEO, maybe just a little bit off, enough that he could tell it wasn’t actually our CEO, but it was pretty close. And that’s over a year ago now. So the pace at which this technology is evolving makes it harder and harder to combat and identify.

Okay. Thank you. So we’ve heard how threat actors are using AI to enable themselves, how the landscape is changing to their advantage, and how deepfakes are shifting the way we train our users to detect fraud and phishing attacks. Based on that, Brad, could you maybe talk through the solutions? What are some of the best practices that organizations, including yours, are putting in place to make sure we have the best defense?

Yeah. A year ago, I would have said, you know, security awareness training and training folks. But the deepfakes are getting so good that now we really need to focus on technical controls. In other words, deepfake detection mechanisms that can help or assist people in actually identifying them. Because it’s getting to the point where it’s so good that training somebody to try to detect issues with it is going to be difficult as it gets better.

 

Okay. Thank you. And I’ll go to Terry.

Terry, so how are you evolving your security strategy to remain resilient in a fast-changing threat environment? I know you consult with several companies. How do you really make sure they are implementing the right defense strategies and practices?

Well, it’s a broad question.

We want to start by making sure we understand what we are dealing with. So when it comes to, you know, we’ve been talking a little bit about AI and deep fakes.

A lot of organizations, and I’m sure a lot of them out there, too, may think they know where AI is being used in the organization, but there is so much shadow AI being used and just shadow SaaS.

The first step is really to get a handle on discovery. What is the inventory of where things are actually being used? And you can’t do it with just a questionnaire, because no one’s gonna answer that honestly, or you don’t get the real information. So there’s a whole other class of tools out there that we will have to adopt to identify what the uses of AI are, so that we can say: okay, this is our sphere, this is our world, now we can start to assess the risk associated with it.

Because you ask someone and they’re like, oh yeah, we just use Copilot and we’ve got it all locked down. Really? If you really look at what’s going on in some of those organizations, they might have forty or fifty tools actually operating that they have no idea are potentially causing harm. So I think it’s about getting a handle first on the inventory, and then, and we’ll get to this when we talk a little more about governance and risk, there are specific methods for how we can assess that risk once we actually have an inventory.
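The discovery step Terry describes, building an inventory from what is actually happening on the network rather than from questionnaires, can be sketched in a few lines. This is a hypothetical illustration: the domain list and the (user, domain) log shape are assumptions, not a real proxy export or a complete catalog of AI services.

```python
# Hypothetical sketch: flag "shadow AI" traffic from web-proxy log records.
# AI_DOMAINS is an illustrative, incomplete list; extend it for your environment.

from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}

def shadow_ai_usage(log_rows):
    """Count requests per (user, AI domain) from (user, domain) log rows."""
    hits = Counter()
    for user, domain in log_rows:
        if domain.lower() in AI_DOMAINS:
            hits[(user, domain.lower())] += 1
    return hits

# Toy log: alice uses two AI tools; bob's traffic is all internal.
rows = [
    ("alice", "chat.openai.com"),
    ("bob", "intranet.example.com"),
    ("alice", "claude.ai"),
    ("alice", "chat.openai.com"),
]
print(shadow_ai_usage(rows))
```

The output is the starting inventory Terry mentions: a ranked list of who is using which tool, which can then feed a risk assessment.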

All right. Thank you. So, I’ll come back to Jennifer. So, how do you balance budget constraints with the need for cyber resiliency? In all of your work as a global advisor, how do you really advise organizations to balance their budget against cyber resiliency?

So, the way that I work with my organizations is basically to make sure they understand, one, that it’s important to start with a strategy and build out the program from there, with the governance all baked in on the front end. As you build it out from that aspect, you can then start to have those conversations about where the cost and the budget may need to be.

The other thing is, and it kind of goes back to your earlier question, part of the solution for where we are today is the pivot. Right? There’s a pivot that needs to happen to identity-first as the defense, and part of that conversation has to happen. So all of those components need to be considered while you’re building the budget. When you’re going to the board, you want to talk to them not so much about the tools, but about the impact, what it means, when you’re trying to get that approval. Alright.

I just want to... Maybe you want to add? Yeah.

I just want to add to what Jennifer’s saying, on the very thing she mentioned, identity. If you look back ten, fifteen years, we were always focused on the response: the monitoring, the alerting, and the response, and everyone was trained up on their plans. I mean, look at NIST CSF, which was all about having a plan and tabletop exercises, knowing that you can get access to your data, and then good backups.

Well, the whole last year, and I’m talking, this just happened in the last year, everything moved now to we have to be preemptive.

And so we have to focus on the identify and protect phases versus the detect and respond phases of NIST.

And so I think that’s absolutely right.

You know, given the speed at which it’s gone, we cannot stick to the old paradigm. We have to act preemptively: continuously identify, use the tools, and then start putting protection in place to keep them out in an automated way.

I totally agree, and I think it’s so important. Like I said, from the pivoting standpoint, it’s a mind shift, it’s a culture shift. But when you look at it, it’s a defense-centric strategy. That’s where we need to be.

Alright, thank you.

Oscar, we’ve heard about tools and budgets and people, and making sure that we are pivoting from what we normally know, right? So, in that respect, what framework or governance model has been most effective for you? Are you using NIST, zero trust, or something else?

So you can almost pick. There’s a bevy out there you can pick from, right? I think it depends on what works for your organization.

So we’ve kind of done a hybrid model of NIST, also looking at recommendations by CISA, and really taking a step back and looking at our organization and what works best for us.

And then also with partners that we’ve worked with that do this for a living, and to help advise us to say, hey, based on your organization and what we’ve seen with others, it may make more sense for you to take a little bit of NIST, but also incorporate some other things too. So we’ve talked in the past, do we go out and get certified for something specifically, whether it’s ISO or NIST or whatever the case is? But honestly, I think for us, the biggest thing is looking at, again, like I said, what works for us best?

Terry mentioned this too: it’s about holding ourselves accountable. We might say we’re doing one thing, but are we actually doing it? And looking in the mirror to say, okay, we’re not. We really need to look deep into that and ask what we can do to be better, more preemptive.

But the resiliency side, too, as they were speaking about, is also a whole other area. But I don’t want to take up too much time.

No, I agree with you. I was just going to say, the measurements around ROI also come into play there. And then the alignment with your NIS 2.0 and your zero trust principles: aligning your spend along those is, you know, what I was saying about the governance piece. If you can align those pieces, that will help you with the spend and also with measuring your ROI. Okay.

One last thing I’ll add, something I’ve learned over time: if you have champions in your business who are behind that, that’s huge. Because you may think, hey, NIST is the way to go, but that costs money, resources, time. Do you have a compliance department working alongside you? If not, get close to them.

Get as many people together as you can from different business units, because at the end of the day, IT is not going to be the only one to say, hey, this has to go through, because we don’t pay for that.

Okay. So moving on. So Brad, we’ve seen there’s a lot of leverage in sticking to the fundamentals we know, right, and also in automation and clear risk alignment.

With that being said, we’ve already started to mention, even from the start, everything about AI, right, how AI is going to be impactful, whether from the defense or the attacker’s perspective, right? So, AI is either an opportunity or a risk, depending on who is using it. So, Jennifer, I may kind of start with you.

So, what should organizations prepare for over the next twelve months?

And the other thing is, what does a future CISO look like?

So, you gave me two questions. Okay, all right.

The first question was around how organizations prepare in the face of AI.

Okay.

So, I was just telling the panel when we met earlier, I just came back from Tel Aviv, from CyberTech. I was doing an embassy delegation, and I spoke on the global CISO stage about this very topic, from a USA perspective. I was representing the USA.

So, everybody else on the stage was from another part of the world. And I’m glad you asked me this question, because it became very clear to me, and I think we know this, but I just want to say it: we should be looking at AI from an attack standpoint and, like you said, a defense standpoint. But at this point, as I’ve heard it said, it’s also a part of your business model, right?

So, that means you’re putting key information in, and you’re building out these LLMs and all of this, which is critical information; these are assets. You remember the term crown jewels? My recommendation would be that you begin to think of that as the brain trust of your organization. You should be doing everything you can to protect your information and your data, you see what I’m saying?

So, it’s like all of those components are now a part of your business model. So, you should definitely be making sure that you’re putting the right controls in place around that.

The other part? What was your other question?

What will the future CISO look like?

Okay. So, the future CISO is going to have to be very in tune with the business. Also, trust is going to be very key, because all of what I was just saying about the data and the information and protecting it is going to be based on whether or not you can confirm, or be assured, that you have the integrity you need. Right? We just talked about deepfakes and all of that. When you start to talk about whether we can rely on what we’re seeing, as this evolves, and it’s happening very quickly, that’s going to be where we have the biggest problem: being able to say, for sure, is that real or not? Right?

So, it’s going to be trust. At the end of the day, that’s going to be another one of those skill sets a CISO will need. And then the other part is AI: understanding, you know, the prompt conversations, the skill sets, and having that as part of your tool set as well. We were just talking about this. All the other roles have leaned into using AI, understanding it, training and skilling up on it; CISOs will have to do the same.

Thank you. So, Brad, the question I have for you: what are the risks of adopting Gen AI tools inside your organization? You do paychecks for everybody, and you have a lot of people’s data.

So, how are you communicating that to your users as they use Gen AI? What are those risks, and how are you combating them?

Yeah. And this risk is no different for human capital management than for any other business working with sensitive data.

Obviously, leakage is one of the biggest risks. So from the very start, we’ve done things like adopt our own tenants for large language models, so that we’re not feeding a public LLM with our data. I do have a funny story about how we got some of those controls in place. When ChatGPT first came on the scene, one of our executive officers took some NPS survey data. I don’t know if you’re familiar with Net Promoter Score.

It’s a measurement of client satisfaction. He took this spreadsheet and sorted it by all the negative NPS comments, because he wanted to understand why people were unhappy with Paychex. He deleted all the good stuff and left all the negative stuff. This was back when ChatGPT still trained on user input by default. He uploaded the spreadsheet and said, please summarize these themes on Paychex customer satisfaction. And of course, all of that data ended up in ChatGPT, and afterward you could ask ChatGPT what Paychex customers thought of them, and it gave you all negative responses.

So that was a pretty big wake-up call for us: hey, we need guardrails and controls around Gen AI to ensure that sensitive data does not leak. Since then, obviously, we’ve put in all the controls necessary to make sure something like that doesn’t happen.
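The guardrail idea Brad describes, checking what goes into an external LLM before it leaves the organization, can be sketched as a simple pre-submission filter. This is a minimal illustration, not Paychex’s actual control: the regex patterns and the redaction policy are assumptions, and a production control would use a real DLP engine.

```python
# Minimal sketch of a pre-submission Gen AI guardrail: scan a prompt for
# sensitive patterns and redact them (or block the request) before anything
# is sent to an external LLM. The patterns below are illustrative only.

import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN shape
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # card-like digit runs
    "internal": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def guard_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return (redacted_prompt, findings). A non-empty findings list means
    the original prompt contained sensitive content; callers can block or
    send the redacted version instead."""
    findings = []
    redacted = prompt
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(name)
            redacted = pattern.sub(f"[REDACTED-{name.upper()}]", redacted)
    return redacted, findings
```

A caller would run `guard_prompt` on every outbound prompt and either refuse the request or forward only the redacted text, which is the "guardrails before the LLM" pattern the story motivates.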

Okay.

Thank you for that. So, Terry, you have a follow-up?

Well, you know, I thought I would comment, first of all, Brad, that’s a great story.

And Jennifer, I am not worthy after hearing all that. You are awesome.

So, I do want to dovetail on what Jennifer mentioned about AI risk and some of the things we can do.

We have LLMs, and we have agentic AI. There are really three things I’ve been talking to clients about that they should think about. With LLMs, there’s a set of detective and preventive controls out there. OWASP published them; they’re not mine.

You just gotta know about them and add them into your risk calculus. For agentic AI, there are fifteen preventive and detective controls if you’re building your own agentic AI workflows. But the other one, which Chris Cronin, one of my colleagues, pointed out this morning, is really the newest thing AI is bringing into the risk picture: public benefit alignment. This is kind of new for people to think about.

We have to balance the benefit of doing this thing against the human suffering, right? We actually had a discussion about this on the panel beforehand, too. So that calculus is a little bit tougher. And I would say one of the ways you can accomplish that is the Duty of Care Risk Analysis standard, which Chris went over this morning, if you didn’t see that.

So that’s it. But I just wanted to comment that I thought that was great.

Thank you.

Oscar, I have a question for you on kind of on this topic. So, within your organization, how is AI changing both offensive and defensive cyber operations within your organization?

So I’ll repeat the question. So, how is AI changing offensive and defensive in our org?

So on the defensive side, I would say many of our vendors are starting to come out with it on the security stack. Right? So there’s a lot there. You go next door, they’ll tell you. So there’s a lot there for us to look at and that we can introduce, and it’s gonna help us.

And that’s just a matter of time. Right? I think we would love to adopt this right away, but we have to look to see what’s gonna overlap, who’s gonna manage that. So on the defensive side, I know we’re on our way.

On the offensive side, I mean, let’s be honest. Cyber threat actors are already beating us on the offensive side. Right? They already know what they can do with these tools.

They know organizations.

They have to go through processes to get anything in place to defend against this. So keep that in the back of your mind. Right? We’re already four or five steps behind when it comes to that. So I think it’s now really about thinking ahead: okay, in the next three to five years, what do we need to do right now to impact the future?

And it’s not just saying, yep, we’re going to go ahead and do that and that. It’s saying, is that gonna be sustainable long term?

Because it’s evolving so often that you also have to think to yourself, is this gonna change in a few years, where what we’re doing on the defensive side is not gonna matter?

But we all know this in cyber and in IT in general, it’s constantly changing. So you have to stay on top of those changes.

Not a lot of sleep. Right? So it’s out there.

But Okay.

I love what you just said, and I just want to add to that. You may hear more and more of this, but I believe there will be more intelligent governance, more intelligent SOCs, more intelligent data centers. And what that means is, and I’m pretty confident of this, in order for us to shift the scales, we will need to be able to defend against the attackers with AI.

So, there’s a whole process to that, but it’s intelligent governance and intelligent defense, basically.

Thank you. This is for Terry. Recognizing that, you know, there are always business and regulatory dimensions in everything that we... Can you repeat? ...Recognizing that there are also business and regulatory dimensions around cybersecurity and any technology process. My question is: cybersecurity is no longer just a technology issue, right? It’s also a business issue. So, how are you driving cyber culture across your organization?

Well, it’s... across? Across. Sorry.

To repeat that, how are we driving cybersecurity to the business, having it as a business initiative? Is that right?

Yeah. So cybersecurity is no longer a technology issue.

Technology. It’s a business issue.

Yeah. It’s just business.

So how are you driving the culture with the business? And Jennifer mentioned this too: the CISO of the future is going to have to be able to communicate with the business.

And this is not something new that’s come out of my mouth or anyone from our organization.

The challenge we sometimes have as security leaders is that technical managers provide technical reports to non-technical executives, hoping they can make a business decision. That doesn’t work very well. They just shake their heads, their eyes glaze over, and they don’t know what to do about it. And the answer is not to get them so technically informed that they know what all these technical vulnerabilities are. We have to translate that into business impacts, so that they can make informed decisions at the executive management level and board level.

I know it sounds like a broken record, but the Duty of Care Risk Analysis has already solved that for us. It’s already adopted in Europe, in our regulations, and by the attorneys general in the states. We simply need to follow it, okay? And what that means, just from a technical perspective for those who do risk assessments, is that we no longer score based on confidentiality, integrity, and availability impacts on an asset. We have to think about the impact to our mission, our business objectives, our obligations, and harm to others, and score on those things.

Because that’s what the business actually understands. Like, oh, so we would be out of business and not make money for three weeks. Yeah. That’s the impact.

We have to translate that. And that’s not what we’re doing. We’re giving them a technical vulnerability on an asset and scoring that. And then what they really have to do is re-risk-assess our risk assessment to understand what the real implications are, because they just want to know: are we okay?

How do we get to okay? They have to do their duty of care: not cause harm to the business, and not cause harm to anyone outside the business. Not just customers, but the general public.
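Terry’s point, scoring risk against mission, objectives, obligations, and harm to others rather than against CIA on an asset, can be made concrete with a toy scoring model. This is a hedged sketch inspired by the duty-of-care idea he describes, not the DoCRA standard itself: the 1–5 scales, the max-across-lenses impact rule, and the acceptance threshold are all illustrative assumptions.

```python
# Illustrative duty-of-care style risk score: impact is the worst harm across
# mission, business objectives, and obligations to others, not an asset's CIA.
# All scales and the acceptance threshold are assumptions for this sketch.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    mission_impact: int      # 1 .. 5: harm to our mission
    objectives_impact: int   # 1 .. 5: harm to business objectives
    obligations_impact: int  # 1 .. 5: harm to customers / the public

    def score(self) -> int:
        return self.likelihood * max(
            self.mission_impact, self.objectives_impact, self.obligations_impact
        )

ACCEPTABLE = 8  # illustrative risk-acceptance line set by leadership

r = Risk("ransomware halts payroll for three weeks", likelihood=3,
         mission_impact=5, objectives_impact=4, obligations_impact=5)
print(r.name, r.score(), "ACCEPT" if r.score() <= ACCEPTABLE else "TREAT")
```

The output reads like the business statement Terry wants executives to see ("we would be out of business for three weeks, treat it"), rather than a CVSS number on an asset.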

Alright. Sorry. Thank you. Terry, you sound passionate about it.

I know. It’s not the first time John’s heard that story.

So, with that passion, I’m gonna open up the floor for some questions.

Okay.

Thank you so much for presenting today. I appreciate all the insights you shared. A question for Brad: how has your organization been reacting specifically to NYDFS Part 500 and some of the requirements around implementing cybersecurity controls, and potentially dealing with whatever regulatory implications that might involve?

It’s one of the wonderful things about being headquartered in the state of New York.

It’s getting to comply with the NYDFS regulations. But, yes, that’s an extra set of responsibilities we have as since we’re considered a financial services organization in the state of New York.

You know, you’d have to go over which particular controls you’re interested in. There are plenty of them that have been added over the last two years. But, yeah, for anyone who is lucky enough to be considered a financial services organization with a presence in New York, there’s a whole set of cybersecurity requirements above and beyond anything that you must comply with for NIST or ISO or any of the other things. And I don’t know if there’s a particular requirement you were interested in.

I was thinking more about identity controls.

Yeah, they’ve just released some guidance related to MFA. I’ll just say that other members of FS-ISAC, the Financial Services Information Sharing and Analysis Center, got together in a CISO congress and discussed our understanding of what those requirements even are, and of all the CISOs in the room, not one of us could fully articulate what the requirements were. They’re that vague. So that’s something we get to enjoy: trying to get guidance from the state of New York. The guidance is so nuanced that nobody really knows exactly what’s required.

And at first glance, I think most organizations say, oh, yeah. We got MFA. We’re fine. But then they have very specific requirements even for MFA that most implementations may not even meet.

So, yeah, it’s been fun. It’s been fun trying to understand what our obligations even are, let alone whether or not we comply with them.

Thank you. We have another question.

Hello there. Given the whole AI trend, I’m curious: in your network and your working engagements, are you seeing any insights on how people are approaching upskilling individuals who might be left behind by the AI revolution, and how the career of a person coming out of school these days might change?

Any takers?

I kind of touched on it a little bit. So, there is, like, in the consulting world, I didn’t set out to do AI consulting. That wasn’t, you know, what it was for me. But what happened was I was called in to help them implement it the right way.

Right? So, for the last year, I’ve basically spent my time letting everybody know: bake it into everything that you do, make sure it’s part of your crisis management plan, all of that. But what I saw on the other side was that there were a lot of companies that wanted people to come in and show them how to be more productive, and different things like that. So, that’s one.

The AI consultant is a big one. I think Netflix posted one of the highest-paid positions, a pretty high-level position, just for people who could come in, consult, and help them see how to be more productive.

But from a CISO and cyber perspective, I would say being AI literate, and definitely having the ability to be a prompt engineer. I would say consider it. I think that’s one of the skills that can take you far, and with that foundation it can spread across other things.

But more importantly, I think it’s whether or not you are leaning in and figuring out how to make it work for you. Keep in mind, as I said, it’s a defense, it’s an offense, and it’s a part of business models going forward. So what does that mean? That means every company is going to have a need for AI. So, that’s my recommendation.

So, I can add on to that. What we have done is we actually started with at least having a Gen AI governance structure and a Gen AI policy, right? So every use case has to go through a board before it’s approved. The other thing is that this year, every employee has an AI goal, right? We have implemented Copilot, and people are being trained on how to use it in Excel and in PowerPoint. It’s just making sure that employees are also AI literate in order for them to actually survive in the current workforce environment.

I’ll just add one comment for the students on that. I saw last year that the University of Chicago has a certification program for an AI officer. You know, it’s like a one-year program for professionals.

It’s eight hours a week, etcetera, and most of it’s off-site, but you can do some on-site. So that’s interesting. That’s where things are going: these are gonna be disciplines. So for any students out there, someone new in the marketplace is definitely gonna wanna have some sort of badge or certification like that, I think.

I think we have it.

Let’s see.

Thank you. In regard to what was stated by...

We can’t hear.

Is she speaking? We can hear you.

Oh, excuse me.

You may have to speak a little bit louder so we can hear.

In regard to what was stated by one of the panel members regarding translating from tech speak to C-suite speak.

Yes. You raised your hand. It was you.

Is there any further advice from the panel on how you can refine that translation from tech speak into, for example, in a large hospital institution, lawyer speak, scientist speak, human resources speak, and so on? Any suggestions on how that can be done? Because they’re all speaking their own language in the C-suite.

Well, I’ll start. So, as I have mentioned, you know, they call me the Olivia Pope of cyber and AI. In critical times, crises, different situations like that, being the CISO at, like I was mentioning, Unisys, I had dotted lines to everything that you just mentioned: legal, privacy, you know, all of that.

So, my process was, I would always start with, let’s just say, an incident or something. I would go down into the weeds. I would understand exactly what happened, and then I’d start to roll it up. Then I would get that verified and confirm that it was accurate, and then I’d roll it up again and again and again.

Once I got to the point where I had it rolled up, and it was now time for me to speak to, as you said, the C-suite, that’s when I would change the conversation, because they’re not necessarily wanting to know about the actual vulnerability or the CVE number. They don’t wanna know that part, but they do wanna know about the risk, and they all have a different angle on the impact they care about. Right? So, I know who I’m talking to, and I know who’s in the room, so I know what to say, if that makes sense.

Right? So, if it’s a privacy situation, I know how to handle that. I guess that’s my answer: you need to know how to speak to whoever you’re speaking to, so know the audience. But I would definitely first have the information, and then I’d roll it up. Just to give you an example, if I’m talking to the technical side, I know what level to go to for them to understand.

When I’m talking to privacy, I know exactly what I need to say. When it’s legal, I know what I need to say. Financially, I know what I need to say. And even CEOs and COOs, I learned this having just been in the fire. I had a saying, I don’t even know if I can repeat it.

But the CEO was always kinda looking to the future, and he was always wanting to know what was going on, should he be worried about it, and did you have it under control? That was kind of how he looked at it. But the COO, from an operational standpoint, wanted to know: what does this mean? Where are we?

And does this impact the customer? So he had his views, but it was always from that perspective. I just knew what to say and where to go. If anything, that’s how I did it.

But I am someone who had to speak multiple languages and translate across all of those while speaking risk. And risk has been my world from day one.

I’m gonna add to that: a lot of the heavy work should be done upfront to define your criteria for acceptable risk. You do that upfront among all the groups and get the whole executive management team agreeing to it; then, when there’s an unacceptable risk to one of those impact categories that was defined, you’re in the clear. You can download the CIS RAM for free; it gives a framework for how you can put that together. It’s a free tool from the Center for Internet Security. You do not have to use the CIS Controls; you can use whatever NIST and HIPAA control combination you want.

And just to add to that: the risk register is something you can use to begin managing this, working with your various areas, because each group has its own set of risks. They would already know what those are, and as you roll into it, that gives you the ability to develop the language and the tone you need for each group based on that risk.
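The approach described here, agree on acceptable-risk criteria per impact category upfront, then flag register entries that exceed them, can be sketched in a few lines. This is a minimal illustration only; the category names, scoring scale, and thresholds are hypothetical, not taken from CIS RAM:

```python
# Hypothetical pre-agreed acceptance criteria: each impact category gets a
# maximum acceptable score (1-5 scale) signed off by executive management.
ACCEPTANCE_CRITERIA = {
    "patient_safety": 1,
    "regulatory": 2,
    "financial": 3,
    "reputation": 3,
}

def review_register(risk_register):
    """Return register entries whose score exceeds the agreed threshold."""
    return [
        r for r in risk_register
        if r["score"] > ACCEPTANCE_CRITERIA[r["category"]]
    ]

register = [
    {"id": "R1", "category": "financial", "score": 2},   # within appetite
    {"id": "R2", "category": "regulatory", "score": 4},  # exceeds threshold
]

print([r["id"] for r in review_register(register)])  # -> ['R2']
```

Because the thresholds were agreed to by all groups in advance, escalating `R2` is a mechanical step rather than a negotiation.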

Okay.

Thank you all for the questions and answers. More are coming up.

I have questions for each and every one of you. Can I ask all of them?

I have questions for each one of the panelists.

So I will start with Terry.

That’s me.

Yeah.

So Terry, you mentioned that malicious threat actors are using AI to outperform cybersecurity professionals.

That we are not where they are. So, my question is: how is it that threat actors are always ahead of the security professionals while we all live in the same world?

Well, the reason is they don’t have policies they have to follow.

Right. There are no rules.

They don’t have guardrails. It’s not the same world; they live a very different one. For instance, we wouldn’t even begin to think of doing something without the guardrails, and they started and ran with it right away.

And if a couple of people got hurt along the way, they don’t care.

Is that within the United States or in Europe? Because I know in Europe they have regulations.

Are you asking if this is global? He didn’t answer that.

Can I answer?

It’s global. I can tell you. I just saw it firsthand two days ago. It’s global.

Okay. How can we be more defensive? You know, like we are taught to be defensive drivers.

Yeah.

Right? How can we be more defensive to compete in the same realm as the threat actors? Because threat actors are part of us, right?

We go to the same market.

So, and I don’t wanna get too much into the weeds on technical tools, it’s continuous threat exposure management combined with automated orchestration of the immediate preventive response, containment, and recovery.

I already said it. You gotta shift. You gotta shift over to that identity-centric model. That’s the defense piece that needs to occur.

And that’s gonna lead to that resilience piece.

And then, like I said before, I dropped another bread crumb, which was looking at all of the parts, the three parts that I called out, and beginning to protect them, really treating that now like a crown jewel and protecting it at all costs. That’s the other way. Again, we’re not thinking of that quite yet, where we need to go, but as I said, if it’s already a part of your business model and you’re putting all your information in it, it’s now a crown jewel. So you have to think of it as something that can be attacked, something you can use to defend, and almost as something that could be a victim now, because it’s an asset of yours.

That’s what I mean. So you need to protect that. That’s like a brain. I don’t know.

Think of it as a big, big diamond. I like diamonds, but just think of it like that. Do you see what I’m saying? It’s a crown jewel now.

So it can be a victim, it can be attacked, or you can use it to defend.

I’ll build on that to kinda answer your question too. I feel like you have to put yourselves in their shoes to figure out what they’re looking for, which of our crown jewels is the most critical to us. Assume that they’re gonna attack that. So the resiliency piece for us has been big, putting on that hat to say, and you hear this all the time, it’s not if, it’s when. When you get attacked, when you get compromised.

How are you going to be resilient after that? What are you going to do to minimize your downtime, whether that’s your backups or having an alternative for communications? And I know you mentioned AI specifically, but they’re using AI to accelerate how quickly they attack us. They don’t have to be tech experts anymore, which is scary, right? So for us, on our end, it’s again assuming we’re going to be attacked and hit.

Now, what can we do to minimize our downtime? How can we get back up and running? There’s a lot to that, but we can’t keep up at their speed because, as they mentioned before, their guardrails are off. They’re out there.

They’re grabbing FraudGPT. We have ChatGPT; they have FraudGPT. Right?

So they’re out there doing things without guardrails, and the intelligence they’re grabbing off of that is almost a cheat code.

So how are we going to battle that? We have to think about it, all right, where do we get hit? What do we do to minimize that?

I think the other thing too is that their sole job is to just be threat actors. It’s their sole job. So they have unlimited time, unlimited resources, and unlimited tools.

So that’s what they eat, and that’s what they breathe. For the technology analysts who are actually defending, that’s not their sole job. And also, there are controls in place: you can’t do this.

You can’t go outside. You can’t use this tool. Before you even start to use a tool or bring tools into your organization, they have to be vetted. You have to go through architecture review, and by then the threat actors are already, you know, harvesting and doing what they need to do.

I dropped one other bread crumb, which was intelligent adaptive governance, intelligent data centers, and intelligent SOCs. We have to go there, and we will have to put that in place so that we can defend at the same speed at which they are attacking. That’s how we’ll get there. We as humans won’t be able to do it alone, because they have AI attacking us today.

Okay. I have a question for you, Jennifer, on that, on the intelligent governance and defense.

Could you define this concept?

Could you define it and clarify further so that we understand exactly what we should be doing, right, per intelligent governance?

So what I will say is that I don’t know how much more time we have.

So, one thing. I would start by doing a couple of things. Right? One, as somebody mentioned, I think it was you: inventory all your identities and your AI assets.

Do that. Everybody, if you haven’t, go ahead and do that. The other one is to begin running your tabletop exercises simulating AI attacks.

Right? You’ll start to kinda see what I’m talking about. Then, after you see how that plays out, begin to use the frameworks that were already mentioned, and start to harden your controls from there.

That will put you miles ahead just from that standpoint. And what I mean by intelligent governance is that you have governance just like we have governance today, but now you’re going to bake AI in. You’re going to draw a line: there will be things that AI can do on its own that will speed things up, and then there will be things that, from a governance perspective, you will not allow, say, where lives could be at stake or different things like that. Those would be things that still remain on the judgment side, the human side, right? But that’s the way we need to lean, so we’re still doing it responsibly while we’re scaling up, if that makes sense.

We’re scaling up our defenses by being able to leverage AI like they’re leveraging it.
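The first step recommended above, inventorying your identities and AI assets before running AI-attack tabletops, can be sketched as a simple script. This is purely illustrative; the asset fields and categories are assumptions, not a standard schema:

```python
# Illustrative inventory of identities and AI assets, the starting point the
# panelist suggests before tabletop exercises. Fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str     # e.g. "human", "service-account", "ai-agent"
    owner: str
    secrets: list = field(default_factory=list)  # credentials/keys it holds

inventory = [
    Asset("j.doe", "human", "HR"),
    Asset("ci-deploy", "service-account", "Platform", ["deploy-key"]),
    Asset("support-copilot", "ai-agent", "CX", ["crm-api-key"]),
]

# Tabletop prompt: which non-human identities hold secrets an attacker could
# abuse if that agent or account were compromised?
exposed = [a.name for a in inventory if a.kind != "human" and a.secrets]
print(exposed)  # -> ['ci-deploy', 'support-copilot']
```

Even a list this small makes the tabletop concrete: each entry in `exposed` becomes a scenario to walk through.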

Right. Okay.

So, sorry, we are almost out of time; we just have a few more minutes to go. Is there any other question from anyone?

I think we have one at the back.

Sorry.

I think one more question, and then we’ll go.

I’m going to try to answer this question in the best way possible because I reformulated it in my head many times.

What are your guys’ thoughts on identity access management? And do you see AI being leveraged through threat actors to affect identity access management? Or are there any potential risks that AI can pose? Or do you just see, like, threat actors leveraging AI to attack someone’s identity access management? I hope that makes sense.

Well, I’m going to just say: zero trust. It is the way to go. And everything that I’ve been talking about is about that, about that shift over to identity.

Yeah. And when you talk about agentic AI workflows, the Model Context Protocol, or MCP, gateway servers are where all these identities, access, and secrets are gonna be leveraged. I mean, that’s how the last Anthropic one happened: they leveraged Anthropic and went after the MCP gateway servers. So, you know, that just happened. It’s real. That’s how it happened.

And provenance will be the gateway to trust. So, it’s all through identity.
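The identity-centric idea behind protecting an agent gateway, every tool call from an AI agent is checked against an identity-scoped, deny-by-default allowlist before any access or secret is released, can be sketched as follows. This is not the real MCP API; all names here are illustrative assumptions:

```python
# Hedged sketch of identity-scoped authorization for agent tool calls,
# in the spirit of guarding an agent gateway. Names are hypothetical.

# Each agent identity is granted an explicit set of tools; nothing else.
ALLOWED_TOOLS = {
    "support-agent": {"search_kb", "create_ticket"},
    "finance-agent": {"read_ledger"},
}

def authorize(agent_id: str, tool: str) -> bool:
    """Deny by default; allow only tools explicitly granted to this agent."""
    return tool in ALLOWED_TOOLS.get(agent_id, set())

assert authorize("support-agent", "create_ticket")      # in scope
assert not authorize("support-agent", "read_ledger")    # cross-scope blocked
assert not authorize("unknown-agent", "search_kb")      # unknown identity blocked
```

The design choice is the default: an identity or tool that was never inventoried is rejected, which ties back to the earlier advice to inventory all identities and AI assets first.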

Okay. So, thanks everybody for your questions. I just have one.

I do have one last question for the panelists to kind of close out.

So, each of you: what is one tool or technology you can’t live without in your...

Is this business or personal?

Based on what you think about it.

Brad, do you want to go first?

Without.

Yep. Technology.

I mean, in your security stack? If you’re asking about tech, then it can’t be my people, because that’s the most valuable asset I have.

That’s cheating. You can’t do that. That’s cheating.

I like that.

I know. From a tooling perspective, I guess I’ll have to go with the EDR. That’s probably the thing I would worry most about losing.

Well, $ENX. That one I mentioned was a privacy crypto token that we launched, but it’s behind Rabbit technology, which is around what we’ve been talking about, the zero-trust identity piece. So I would say Rabbit technology. Don’t wanna live without that.

Terry? I’m gonna say ReasonableRisk.com, because it’s the way you communicate to your executives what you need, to get the budgets you need, to get the tools you need. There you go.

That’s a good answer.

I would say identity, all day. I think any kind of identity tool that’s notifying me of what’s happening is huge because, to the question you just asked about identity, that’s really the biggest attack vector, I think, and going forward you need eyes on that. So that’s the one I would say I couldn’t live without.

Are we doing personal or not? We’re just doing business-related, right?

Whatever.

We don’t have too much time.

Thank you so much. So what we heard today from these amazing panelists is that the threat landscape is accelerating, that resilience requires deliberate prioritization, and that AI is both a force multiplier and a new risk surface. So a huge thank you to the panelists, please.

And thank you all for joining, and do enjoy your day.