AI Cyber Threats and Strategic Defense
In the FutureCon panel discussion featuring cybersecurity leaders, the focus was on the transformative impact of AI on both defense and attack strategies. Terry Kurzynski, founding partner at HALOCK Security Labs, served as a panelist. Organizations are increasingly adopting cloud technologies, necessitating new skills and a shift in risk management approaches. The conversation highlighted the importance of balancing readiness and resilience, as cyber risks have become a board-level concern. Effective communication between security priorities and business goals is essential, especially in light of emerging threats like deepfakes and quantum computing. The panel emphasized the need for robust governance policies and third-party risk management to safeguard mission-critical systems.
PANELIST: Terry Kurzynski, Partner, HALOCK and Reasonable Risk | Board Member, The DoCRA Council
TRANSCRIPT
So thank you, everyone, and good afternoon. Welcome to our panel. I would like to welcome our panelists: Terry Kurzynski, Mark Mowers, Paul Neff, and Cicero Chimbanda.
Yeah.
So let us start with our introductions.
Please go ahead and introduce yourselves, a quick one minute each.
Time. Alright. This is Terry Kurzynski. I think I was already introduced as a senior partner with HALOCK Security Labs. Glad to be here.
Yes. Mark Mowers, Landmark Credit Union.
I’ve been with Landmark for going on nine years.
Our headquarters is right around the corner. And so if you’re Wisconsin-based, we’re an amazing credit union.
Alright.
Paul Neff, Wisconsin Department of Public Instruction. Happy to represent the public sector on this panel. I’ve been with the department. We are the department in Wisconsin that supports K-12 schools, primary and secondary education, and public libraries.
And it’s exciting because this job, which I love, has nothing to do with my prior twenty-five years of experience in the United States financial sector, primarily at the Federal Reserve in several Federal Reserve jurisdictions and the US Department of the Treasury.
Glad to be here. Thank you.
Cicero Chimbanda, CISO and CIO of Loop Capital Markets. I wear dual hats, and I've been with the firm for about sixteen years.
I am also on the education side, making my way there, teaching. I teach cybersecurity at the city colleges, but I also sit on the board developing curriculum for cybersecurity for the next pipeline. So that’s a passion of mine, making sure we’re giving back to the next pipeline.
And I’m glad to be here.
Right. Yeah. I'm Kasi Paturi.
As you said, President of the Pan Asian American Business Council and CEO of the Global Cybersecurity Initiative, and I chair a few organizations and conferences here.
And I want to recognize some of the ISSA people here. How many are there? Can you please raise your hands?
So I think we need to have more. Recently, we started an ISSA special interest group that is addressing boards of directors. Those who are already on boards, or would like to join boards, please join ISSA and learn; we are going to mentor you so you can get onto boards.
So we need a lot of leadership for the future, particularly in the cybersecurity space.
So there is one webinar coming up on November eleventh. Go to ISSA International and sign up. Okay. Coming to our panel now. Let us focus on strategic and risk-oriented operations.
So let me start with Terry.
So as the cyber landscape continues to evolve rapidly, how are CISOs recalibrating their risk models and security strategies to keep up?
Well, I think the big topic of the day is AI, artificial intelligence. Right? Every day, someone is introducing new AI models. Everyone's talking about AI. Quite honestly, I work with our client base, and I talk to them frequently, every day.
A year ago, a lot of them were just considering it. But now they’re implementing, and they’re implementing in a way that they’re not really sure that they are considering all the risks. They’re a little bit nervous about it, but they’re getting pressure from the top to leverage and get the returns on this new technology.
So we see that kind of delay last year, and then this year, they’re being forced by upper management and the board to catch up. So we do see them scrambling to figure out how to put in risk in there.
And I know there's an AI discussion that's probably going to happen anyway. But when you consider AI risk, there are really two separate risk buckets.
There are the security vulnerabilities being created, because there are new vectors, new ways to get to, say, data. But then there's another type of risk, which is really the public benefit and trust balance.
Right? Should we be putting this in place? What are the societal impacts of that, the benefits versus the downside? So there are really two separate risk calculations that I think are going to happen, and folks are implementing those right now. We do see that happening. So that's my part on that.
Okay. Mark, I want you to add your comments.
Yeah. I would say, you know, cloud isn't new, but it certainly is still changing how we work.
From my experience, I think I represent a lot of typical IT people. I was in infrastructure, and now jumped over to security about twelve years ago.
But cloud requires a whole new set of skills. Your infrastructure people, like, I grew up setting up, well, I don't even want to name the operating systems earlier than Windows 95.
Most people don't even remember the small discs that you plugged into computers to build operating systems, or building email servers. Now everything's in the cloud. So Landmark is moving to be cloud-forward.
And we're looking at retraining all of our IT people, including my team on information security. How do you do cloud posture management? How do you do vulnerability scanning? It's a totally different set of tools and skills, a commitment, and things that we share and report up to the board of directors and other IT leaders.
You've got containers and things like that. So my approach has been retraining some staff, getting certifications in Azure as an example, and bringing in cloud security posture management tools.
So that's been our approach, but I think cloud is definitely one of the other big technology shifts.
Cicero, I'll ask you to add your comments.
Yeah. Thank you, Kasi. The question being: how are we recalibrating our risk models with the ever-changing security platforms and threats?
For me, and for us at Loop Capital, because we're a financial trading and investment banking firm, I think it starts with the strategic side of the house.
We're always looking first at the mission-critical data systems that interface with our business side, the side that's generating revenue. On the banking side, with AI, it's obviously market-making tools and order management systems that we're looking at. We're always looking at how we can make sure we're mitigating risk to the revenue-generating side of the house.
So that’s the first lens that we look at, the strategic side. Then the second side is the trust.
Trust is the regulatory obligation. And I know we’ll talk a little bit more on the regulatory side of the house, but we’re always looking at the ever-changing, rapidly changing.
Rules of the road, making sure we’re staying compliant.
And so we're always looking to make sure that our risk models interface with our compliance. And then lastly, I would just say operational, which is the stability side of the house.
There's a lot more scalability and a lot more computing power that's needed, and a lot more tools out there. So in our risk models, we're looking at how we can meet the demand and maintain operational excellence. So I think those are the three things we look at.
Yes, I'll ask you a follow-up on that. How do you balance reactive measures with proactive threat anticipation, and is there an optimal mix between readiness and resilience?
Yeah, I think that's a great question, Kasi: managing proactive versus reactive measures within our organization.
You know, right now we pretty much accept the fact that prevention alone is pie in the sky. You're not going to be able to prevent every attack, but you want to be able to mitigate.
You want to be able to predict in order to mitigate.
And when I look at those balances, I think about budgets. Where are you going to put your money, more on preventative means or readiness means? I think it's got to be a balance, a fifty-fifty balance. That's the way I look at it.
And the way we focus on our specific industry is on how it impacts our liquidity assets. As a firm, it's all about liquidity: how many reserves do we have in order to service our clients as an investment bank? So our risk models are always tied to our business risks. When we're talking about spending money, whether it's preventative or readiness, we want to make sure the end justifies the means, that we're always tying in our business impact model and total cost of ownership, which I talked about earlier.
But again, it's a balance, and it's educating our C-suite to make sure they understand that you can spend all your money up front, but it's almost impossible to prevent everything. You have to have monitoring. You have to have quick response, and you have to have the right skill set in place; a lot of the preventative measures come down to personnel, making sure you have the right people with the right skills for the job. But we'll unfold that more as we talk. Thank you.
Yeah.
I just want to add one thing to that. So if I put my governance brain on, go ahead.
I'm using my risk assessment to prioritize how much I have to do for readiness versus, you know, controls. So I just want to put that out there: I think you should be doing a risk assessment to prioritize that and get the right balance.
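The prioritization Terry describes can be sketched as a toy risk register: score each scenario by likelihood times impact and fund the highest scores first. The scenarios and scores below are illustrative, not from the panel:

```python
# Toy risk register: score = likelihood x impact, fund the highest first.
# Scenarios and 1-5 scores are made up for illustration.
risks = [
    {"scenario": "ransomware on file servers", "likelihood": 4, "impact": 5},
    {"scenario": "phishing credential theft",  "likelihood": 5, "impact": 3},
    {"scenario": "third-party data breach",    "likelihood": 3, "impact": 4},
]

def prioritize(register):
    """Rank scenarios by likelihood x impact, highest risk first."""
    return sorted(register, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

for r in prioritize(risks):
    print(r["scenario"], r["likelihood"] * r["impact"])
```

The ranked list is what drives the readiness-versus-controls split: spend first where the score is highest.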
Alright. Thank you. Okay. So, yeah, let's switch to communicating with the boards. We have seen that cyber risk is now positioned as a board-level concern.
How do you effectively communicate security priorities and threats to non-technical executives? How do you do that?
Yeah. It sounds like these guys aren’t gonna get any questions, so I’m gonna have to answer them all, it looks like. No.
Just kidding. We’re pretty balanced here.
So, the way we want to communicate: first of all, I wear dual hats, and the reason I wear dual hats is because we're a smaller firm. We're about three hundred employees, but we compete with some of the larger firms in our bracket. We're more specialized; think of it like a law firm. We have two HQs, one in Chicago, one in New York, and we do a lot of trading. About fifty percent of our business comes from trading.
And so, when I'm talking to my board, which is really a private board because it's the partners, plus other stakeholders, it's really all about strategic alignment. Each of our business units has specific strategic goals they're pursuing for the following year.
I make sure all my KPIs and KRIs are aligned to each business unit's strategic goals. So that's the first thing. The second thing: obviously, being a regulated firm, we answer to the SEC, FINRA, SIFMA. These are the bodies that regulate us.
I'm always making sure I'm aligning my communication to assure my partners that we are adhering to the regulatory statutes that apply to our firm. And lastly, I'll leave with this: it's just operational excellence. We don't get a seat at the table unless we are up to date. Making sure we have availability, making sure we have tested BCP plans and DR plans that tie into our operational excellence, is the way we communicate, and it's got to be language that they understand.
Not tactical language.
So that’s what I would say.
I think, Paul, you must be interacting with a lot of them in government with your government experience.
We do.
And that’s a great angle because in the public sector, obviously, the motivations of your leadership, both elected and appointed, have some differences.
Elected officials are worried about the instantiation of risk that would cause disruptions to services the community uses, disruptions that carry political risk, certainly reputational risk: the kind that generate a thousand letters, bring folks to town halls in large numbers, angry, or cause the legislature to haul people before committees and demand answers. With that said, what I've found in my career to be most effective for explaining risk to these folks is first to ground any discussion of cyber risk in impacts that have main-street visibility, impacts that essentially devolve to something people expect their government to do.
And that lends itself in turn to being able to tell a plausible story to say, you know, pick an example.
You know, talk about an incident that could occur based on a real gap in one’s control framework or unaddressed risk or need, and take that all the way through to how it causes, you know, some sort of realistic impact on everyday people.
And to make sure that they understand that chain of inference all the way through. Risk presentations kinda turn into a series of stories that you tell.
They need to be backed up by solid facts and analysis.
But ultimately, human beings respond to narratives and stories, and to the ability to have an emotional stake in the story you're telling. It can be a tough ask; it's something to strive for.
Alright. Mark, what frameworks or storytelling techniques work well with the board?
Yeah. Yeah.
So, and let's make this a little interactive: how many people report to a security committee or board of directors?
Okay, a smaller number. Yep. It's tough. I mean, when you get to that point... I started doing this a while ago.
I was an officer in the army, so I briefed generals and colonels and stuff like that.
But this job was the first one I had to brief a subset of the board of directors. And I just wanna acknowledge that it’s tough. It’s not easy. But like it was talked about earlier in this conference here, meet people where they’re at. It’s no different than meeting someone at a conference here. Meet them where they’re at.
One of the talks today gave some good information about meeting with different departments and executives, seeing what their hot buttons are and what their understanding of cybersecurity is. So, some practical examples of what worked for me. Again, I report to a security and risk committee, a subset of the board of directors, fairly technical people. It's an array.
But one example is our phish click rate. As simple as that. The industry average is, like, a six percent click rate.
Ours is two point five. So that makes sense to them, and I explain what the click rate means.
Another one that I introduced over the last year is the efficacy of our security tools.
I was just putting myself in their position: what would I be looking for if I was on the board of directors? There's a big difference between having your endpoint security controls at sixty percent versus ninety-eight percent. You're never going to hit a hundred, because there are systems sitting in drawers, or being issued, things like that. So those are a couple of examples.
And then, you know, NIST CSF, cybersecurity framework, is talked about today.
That's a maturity rating. I won't go deep into it because it was covered pretty well today, but those are rankings they understand: are we risk-informed, are we repeatable, are we adaptive, things like that. So look to frameworks to help you tell the story. That helps.
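The numbers Mark cites reduce to simple ratios a board can track quarter over quarter. A minimal sketch; the figures are illustrative, not Landmark's actual numbers:

```python
def click_rate(clicks, delivered):
    """Phishing-simulation click rate as a percentage."""
    return 100.0 * clicks / delivered

def coverage(protected, total):
    """Share of endpoints where the security control is active."""
    return 100.0 * protected / total

# Illustrative numbers only.
ours = click_rate(25, 1000)        # 2.5 percent
industry = 6.0                     # rough industry baseline from the panel
endpoint = coverage(980, 1000)     # 98 percent; never 100, spares sit in drawers

print(f"Click rate {ours:.1f}% vs industry {industry:.1f}%")
print(f"Endpoint control coverage {endpoint:.1f}%")
```

The point of the sketch is the framing: each metric pairs your number with a baseline or target, which is the comparison a non-technical board actually responds to.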
Yeah. Can I answer that?
I have a quick story because I think this is relevant here.
So I’m frequently invited to boards to meet and talk about cybersecurity for our clients. And this is probably ten years ago, so maybe some of you have heard this story.
But I come in and there's a big stack of paper in front of all the board members, including myself. At some point during the meeting, they excused the CISO and the CIO; they had them leave. They said, "You stay," and they meant me. Then the chairman of the board picked up this twenty pounds of paper,
lifted it up, and dropped it very hard on the desk. He goes, "What is this?" I paged through it and said, "Well, it looks like these are all pen test reports." He said, "What does that mean to us?" And I said, "Absolutely nothing."
And that was the first experience I had like that. I thought, oh my god,
CISOs are providing technical reports to the board. I said, "You should never see this." Right? So I said, you need a risk assessment, and you need something in business terms. And if any of you attended the nine-thirty session this morning, Chris Cronin talked about nontechnical executives getting these technical reports and how that's just hurting the cause.
So anyway, you've got to put it in their terms: the mission, the objectives, the obligations, the strategic alignment, as Cicero mentioned. That's what they understand. They don't understand the technical stuff, like "we have a thousand vulnerabilities today; yesterday we had fifteen hundred."
That doesn't mean anything to them. Are we going to be able to operate? Are regulatory issues at stake? Are we going to get sued?
What's going to happen? We've got to put it in terms they understand. So I just wanted to put that out there.
Thank you. Thank you.
Okay, we're switching now to AI and the automation of cybersecurity. AI has become both an opportunity and a risk amplifier. So I'll ask Paul on this one. AI is transforming both defense and attack.
How are you integrating AI responsibly into your cybersecurity programs,
without introducing new risks?
Well, that's kind of a leading question, Kasi.
Because I was thinking of this in the earlier discussion.
We’re doing quite a lot to respond to the evolving threat framework, but a lot of what we’re doing is essentially trying to make our blocking and tackling and hygiene more efficient by reducing our attack surface, by working to make our procedures more efficient in terms of our protection, detection, and response.
And one thing that we are not doing is rushing down any rabbit holes with respect to AI. You asked a great question. It’s true that AI is transforming both defense and attack.
But how is it doing that? Something we’ve noticed. For example, people talk about AI threats all the time.
We can bucket those in several ways.
A tremendous enhancement to social engineering capability.
Using deepfakes in large-scale campaigns, for social engineering and other things.
And AI certainly is in the process of transforming the development and deployment of malware, again at scale.
But what do you do about that?
You have to respond at scale. AI integrated into various tools can certainly help with that. But at the end of the day, our perception is that you're generally doing a lot of the same things you would be doing anyway. That's particularly true of the predominant risk I see right now, which is the social engineering risk that is instantiating.
And you don't use an AI to... well, okay, you do. But AI is going to be of limited utility in educating your users to be cautious, report incidents promptly, and secure the human, which is your front line of defense.
We could go into much more detail about that. How we are responding, therefore, is that we look at every instance where we see AI purportedly making a transformative impact in either attack or defense, and then ask ourselves if that is really true and whether it is going to change our approach to anything.
What that’s generally translating into is a pretty cautious approach.
We have a couple of proof of concepts that we’re doing where we believe that AI will truly be transformative in defense.
But for most of this other stuff, we’re moving fairly slowly, ensuring that we continue to appropriately calibrate the various tools that we use, those that implement AI.
Because we see a significant risk in terms of unexpected impacts. AIs make a lot of mistakes, guys.
And you don’t want them making mistakes when they are automating your response to malware infections. So we calibrate very carefully the tools that we do have, ensure that we’re working with our vendors to understand where AI is implemented and what its impacts really are, and continue to tighten our process controls around the use of those tools.
Okay. Terry, can you please discuss a few practical issues with AI usage?
Okay. I think of AI security probably in three buckets. There’s the use of things like ChatGPT and other sorts of tools like that internally.
Organizations may build their own LLMs and agentic workflows to try to replace things.
So there's the building of their own AI internally. But there's a third one that's sort of hidden: the use by our third parties. So, Paul, even though you may not be implementing AI, all those software providers and partners are, and that's really the danger: the embedded AI in all these tools that we use. How do we check and know what the impact is on us?
And I think that getting a handle on that is really important. So discovering where the use of AI is, both internally and with your partners, is probably number one. Getting the governance framework in place. You gotta get your policies in place.
How are you gonna regulate the use of AI? Right?
But there are now tools available. They’re very new, but they’re available to actually do the discovery, do policy enforcement, and do red team testing on your AI implementations.
Test the guardrails and even supplement the guardrails where they’re not sufficient. So these things now exist. And so I think it’s important that organizations know that you know, we have a responsibility to seek these things out and put them in place.
It's no longer "hey, that doesn't exist."
These tools are here. So hopefully that helps. Yeah.
So I’ll move on to Cicero. How do you assess and manage the security risks of AI systems deployed by your organization or vendors?
Yeah.
I think I definitely agree with my colleagues here.
AI is the next wave. Obviously, it's been around for a while; it's just been renamed. But the force-multiplier effect is obviously there.
When it comes to risk mitigation of AI for ourselves, internal and vendor-related, the first thing is education.
We've gone on this campaign of educating, first of all, our internal staff who make decisions on the types of technologies we're going to bring in, and educating our C-suite. We've had the big firms, Ernst & Young, PwC, Deloitte, do case studies for our senior execs. We brought them in to think forward and then work backwards, kind of reverse-engineering AI: what the end state would look like when you're generating revenue, all the way down to the use of AI web or chat engines, Copilot, and the like. So there was the education piece, and then we opened it up to the general employee level, because there's a lot of confusion about personal use of AI,
ChatGPT, Copilot, versus corporate use of AI. It's a big difference, because in a corporate setting you're inputting corporate intellectual property into these systems. You have fiduciary responsibilities to your customers. If you don't know the attribution, where the answers come from, what's in the LLM or the data lake, you can obviously get erroneous information, as was mentioned.
So education was the first wave. The second wave is really classifying our data. There's private, there's public, and then there's paid data, at least in my world, where you're paying for real-time feeds, the Bloombergs, for example, and making market decisions on them. So for internal risks, we had to put those three levels of classification in place.
And lastly, you talked about vendors: TPRM, third-party risk management. We've had to revamp our questionnaires for our third-party vendors. And like you mentioned, it's not just about whether we're deploying agentic AI or generative AI; it's our vendors that already have it in place. We have to have full disclosure. We're asking the questions.
We’re making sure that especially mission-critical data is scrutinized in that area.
So these are some areas. Alright.
Okay. So, Terry, are you implementing any AI-specific governance policies for your clients?
Yeah, governance policies: we're developing policies for them. And one of the big requests we're getting now is around regulatory requirements, especially for global companies. You have things like the EU AI Act, which is very specific: you cannot use AI for certain things, like, for instance, assessing a person's mood.
You're not allowed to use AI for things like that, behaviors, anything human-related like that to assess a person; it's really off limits. Then there are the high-risk items, which might be manufacturing equipment or putting together pharmaceuticals, things like that, and then the rest. So there's a regulatory navigation that's very difficult for clients to figure out.
And intellectual-property-wise, I mean, look what happened to Anthropic. They had to pay one point five billion. They built their own LLM off of copyrighted material, and they got sued for it. They could have paid one hundred and fifty thousand dollars per copyrighted work; they negotiated it down to three thousand dollars. It wasn't the fair use of the material for derivative works out of AI that was in question; it was the fact that they had exact copies of all the copyrighted material stored on their systems. I do have some clients building their own LLMs, so you have to be very careful about where that data is sourced and how you're getting it to build your models.
What was the full question?
The same question, about your governance policies.
Governance. Yes. Okay. I think that’s the number one question that organizations are trying to put in place, is the governance apparatus and the policies.
And particularly, how does that affect their risk assessments? Again, as I mentioned, it's impacting them in two ways.
What are the new vectors being introduced that put my data at risk? And then, what is the public-benefit relationship in actually going forward and putting it in place? Measuring both of those, and having both in your risk governance program, is really important.
That’s what organizations are doing right now that are considering this. They’re putting those things in place.
Right. Okay. Thank you.
Kasi, could I add an opinion?
Go ahead.
Oh, great. So, yeah, I'm about to restate everything Terry just said in a different way.
But repetition is the mother of learning.
Fair.
Our governance process presently boils down to about half a dozen key principles. First, do we know where AI is being used in the organization, for what, and who is using it? Second, when any input is going into an AI, is it kept confidential? Do we have any reasonable assurance of that?
Third, the AI will produce output.
Do we look at that output? Do we have a way of assuring ourselves that it is accurate and complete, and correct?
Fourth, do we trust these services where they’re doing something critical, whether we are operating them or whether they are embedded, more likely, in a third party’s products, with or without our knowledge?
Do we trust that these are going to be there when we need them?
And then, finally, are we on the right side of any kind of legal and reputational impact that might devolve from our making use of AI?
Those are our governance principles, and that’s what we try to do every day in working with this stuff.
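Paul's principles translate naturally into a per-service review checklist. A sketch; the record fields and the example service are hypothetical:

```python
# Hedged sketch: one record per AI service, checked against the
# governance questions Paul lists. Field names are illustrative.
PRINCIPLES = [
    ("inventoried",        "Do we know where it is used, for what, by whom?"),
    ("input_confidential", "Is input into the AI kept confidential?"),
    ("output_reviewed",    "Is output checked for accuracy and completeness?"),
    ("availability_ok",    "Will it be there when we need it?"),
    ("legal_cleared",      "Are we clear of legal and reputational exposure?"),
]

def review(service):
    """Return the governance questions a service record fails to satisfy."""
    return [q for field, q in PRINCIPLES if not service.get(field)]

# Hypothetical record for an AI capability embedded in a vendor product.
embedded_ai = {"name": "embedded vendor AI", "inventoried": True,
               "input_confidential": False, "output_reviewed": True,
               "availability_ok": True, "legal_cleared": True}
gaps = review(embedded_ai)
print(f"{len(gaps)} open question(s)")
```

Anything the review flags becomes a question back to the owner or the vendor, which is the everyday practice Paul describes.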
Okay. So let us switch to supply chain issues; supply chain vulnerabilities continue to expose organizations globally.
With that, let me ask Mark: with the rise of software supply chain attacks, what controls or strategies are most effective, particularly for securing third-party programs?
Yeah. Third-party risk really doesn't get talked about enough.
Recently there's been a lot more activity; Kaseya, a few years ago, for example.
The really notable one... was it Yahoo? Oh, excuse me. Which one was it, the one through...
The HVAC systems.
Target. Yeah. Good one. Way back. But now there's been so much activity. I don't know if Fortinet is here, but we don't use Fortinet.
But I say to my boss: gosh, there are vulnerabilities coming out for Fortinet all the time. That's a third party. So I'll summarize my thinking on third parties. One: for a banking institution like us, there are other organizations, third parties, that essentially hold our data.
So mortgage applications, for example.
That’s an outsourced third-party system, very common for financial institutions.
You know, what happens if they have a breach? We haven’t. We don’t. There’s nothing going on at the moment.
You know, nothing. But what if they did? That’s what we have to be concerned about as we talk about incident response planning and stuff like that. So if they have a breach, I mean, there would be some context there.
Likely, we wouldn't be the only credit union or bank whose data was breached. But, essentially, we would have to communicate to our members that their data was exposed. So that's the first category: someone else holds your data. Another one is the tools we have in our environment. Again, I'll go back to firewalls.
There are vulnerabilities in the code of a firewall, like any other application. So there's a responsibility on the vendor to keep these systems up to date, and there's also a responsibility on the client to keep these systems patched and up to date. And there are more examples.
But then we also, not unlike other organizations, invite other companies to implement systems for us.
So we’re inviting in third parties that have sometimes administrative-level access.
So that's another example. Now, let's talk about three controls.
So third-party partners, of course, third-party due diligence.
There’s the financial side of it, but also what are their security controls? What are their certifications? Are they PCI DSS certified? Do they have other attestations on how their other controls are in place, and have they been reviewed lately.
Then integrations on how third parties are integrated in our environment, and that could be people or systems.
So it’s really important to review those integrations. In my organization, I have an architect review them, and then they come over to my department. So we’ll review before they get implemented and then right after. Then there’s controls testing and stuff like that. Lastly, a lot of people forget to do access reviews.
It’s super important and on so many levels to do access reviews, at least on an annual basis.
And then make sure that enterprise risk management or someone in IT is on the distribution list for critical outages and updates. In other words, if there is a vulnerability in a system that we utilize, are those notifications getting to us, you know, on a zero day or something like that?
So those are some ways to mitigate risk there, things to consider. Okay. Terry, can you add to that on vendor assessments and give some examples?
Well, I was going to say, on the software supply chain, that’s where we’ve seen lately all the software supply chains being hacked, where they’re actually getting to the developers: the GitHub, the SalesLoft, the SolarWinds attacks. They got access into the systems, into the code, and the code got dispersed.
What was the name? npm. Packages with billions of downloads are now infected with malware. So I think there’s a whole change coming in how we trust our SaaS providers and our software providers.
We can no longer trust that the source is going to be good code. So we have to enforce zero-trust principles and validate and authenticate the signatures and tokens all the way through the cycle. So what I have a lot of clients doing is they don’t just deploy automatically everywhere. They first put it into test, or dev, and check it out there before rolling it out publicly.
So there are probably some strategies there that people a lot smarter in application development can help out with, but we can no longer trust the sources of the code. They’re getting right to the developers. They’re conning the developers into putting the code right in there. And you can have an approved list of plug-ins and extensions, like, say, SalesLoft did.
It didn’t matter because it was coming from an approved plug-in. So then we got to think about that.
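Terry’s "validate all the way through the cycle" idea can be made concrete with a small digest check: pin an artifact’s hash at approval time and verify it before the artifact touches even the test environment. This is an editorial sketch only; the function names and the notion of a pre-approved digest are illustrative, not a tool any panelist named.

```python
import hashlib


def sha256_of(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path, pinned_digest):
    """Compare a downloaded artifact against a digest pinned when it was approved."""
    return sha256_of(path) == pinned_digest.lower()
```

Signed-package checks (for example, `npm audit signatures` on npm registries) and the staged test-then-prod rollout Terry describes layer on the same principle: verify before you trust, at every step.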
Okay. How much transparency do you require from your vendors? Like do you verify that?
Well, I’m not the right guy to ask that question to. I will tell you that.
I mean, we do third-party assessments, but as for how much transparency we require, I’m the wrong guy to ask.
You wanna talk? You’re talking about transparency. How much transparency do you require from your third parties?
Anyone? Okay.
I’ll go real quick. We’re fighting over the mic here.
We’re so enthusiastic about your moderation.
I will tell you that this is probably the hardest problem that I am struggling with right now. And how do you actually do this assessment in the environment that we are currently in given the elevated and increasing supply chain risk we all face?
Honestly, we do our best: we ask for all the same reports and due diligence we can reasonably get from our partners. But ultimately, we focus on the ability to enforce our contracts, writing them so they are favorable and have real teeth should we find that a vendor has not played nice or done their own homework.
And honestly, I would be interested in others’ views.
But right now, that’s what we see as the most efficient way to do this.
Yeah. I would say, you know, we take a step back and follow a principle. The model that we use, which I mentioned a couple of times in my answers, is STS.
The first S is for strategic security, then T for trust, and the last S is stability.
The first portion of that is that you obviously can’t mitigate risks for everything in your organization.
So you have to prioritize, and so we prioritize by starting with what are the mission-critical systems, where is the mission-critical data residing?
That’s the first risk assessment that we need to take. We need to go through that.
It’s relationships. The link starts with relationships with the business units, because they are the interface with the business. For me, for example, Bloomberg is a big, big one. I sit on the board with the Bloomberg team. We meet with Bloomberg quarterly because that’s the order management system that we use: understanding all the risks that they have internally, how it interfaces with us, and then what we have internally in terms of Bloomberg support. So that’s the strategic side. Then the second relationship is with our compliance, which is our regulatory side of the house.
They run the TPRM compliance.
They have the systems in place. We partner up with them, making sure that it’s not just a questionnaire checkbox but that it actually makes sense, that security is aligned with the questionnaires we’re asking our third-party vendors. We’re in those meetings.
We’re actually visiting some of our critical partners in their space, going to ask the questions just like they do us when we’re servicing business with them. So it’s that relationship with our compliance side for strategic compliance. Then lastly, there’s the operational side, and that’s where your third-party vendors actually help you keep your lights on. And again, having that bucket of strategic trust and stability, and really focusing on critical data and mission-critical systems as the front line of your TPRM, helps you mitigate the critical side of the house.
And then you do the latter part. So that’s what I would say.
Okay. So let us switch to this regulatory and compliance landscape.
Do you want to say anything before we switch? I just wanted to follow up on the third-party piece.
You know, it’s part science and it’s part art. What it shouldn’t be is a checklist.
Alright? If you’re treating it like a checklist, I think you’re gonna have a lot of high risk. Okay?
You’re here.
Having done a lot of third-party assessments for my clients and their vendors, I can tell you that, unfortunately, the vendors and supply chains, they lie.
Okay?
And so the art of this is understanding the kind of follow-up questions that you need to ask to really know whether they have that process in place and what’s really going on.
Also, couple that with some stealthy tools that you can run unbeknownst to them, so that when you’re asking those questions, you’re already coming in armed, knowing the answer to a lot of things, and you can kind of tell whether they’re being truthful with you or not. So it shouldn’t be treated like a checkbox exercise, I guess, is what I’m saying. And unfortunately, a lot of the people performing third-party risk assessments are literally doing a checkbox: I just need the answer to this box. They’re not really thinking about it from a risk perspective. I would challenge everyone: if you are a part of third-party risk, it’s got to be thought about in terms of impacts and real risk. Okay.
All right. So, regulations are catching up fast from DORA to these SEC disclosure rules.
And Cicero, with your finance background, let me ask you: how are evolving global cybersecurity regulations impacting your approach to compliance and incident reporting? Absolutely.
I think we got one minute. Is that what he said? No.
No. No. You have time. We have a virtual question. Just put it up.
Okay. So there’s a virtual question up there behind you, Cassie. But as far as for us, we are a global trading firm. We trade in all markets. So part of the outlook is looking, first of all, at statutory rules.
And what I mean by this is that not everything is legally binding. Right? We have best practices. We have international law, which could be legally binding, but then you have statutory rules that you absolutely need to abide by. And as IT professionals, we need to understand that even though it’s legal language, because then we can advise our firm based on certain regulatory obligations: it’s not just a good-to-have, it’s a must-have. Because if you don’t have it, somebody can go to jail. There are fines associated with it.
There are legal penalties around it. So it’s weighing that, and for us, for example, recently you have the SEC, which put out the disclosure rules that we need to follow. If there’s been an incident, we need to disclose, especially to our clients or customers. So in partnership with compliance, in partnership with our legal officer, and then our business units, it’s always checking to see what laws have come out that we are legally bound to, because you’re not legally bound to everything.
That doesn’t mean you don’t need to follow the rest, but the legally binding pieces need to be a priority. And then the second piece of that, when it comes to regulatory compliance, is looking out to see what tools already have these regulatory laws or rules built in, so you don’t have to reinvent the wheel. It’s hard to stay updated in real time on what’s being deployed in the legal framework. So, just a quick example: one of the vendors that I was talking to this afternoon at the booth there has a great tool.
It’s an identity and access management (IAM) tool, and they do reporting, real-time zero-day reporting on IDs within your organization. My question to them was: do you have, first of all, a vertical based on industry? For us, it would be financial. And then, do you have legal regulatory rules around the financial piece?
And can you report on what’s mandated with your tool? That would be ideal. So you have, for example, for us, the SEC legal disclosure, and then there are certain reports that you need to automatically have when there’s an audit. It would be great if your tool already had that in place for whatever industry you’re in.
So these are the kinds of challenges I put to my vendors: to also stay updated with the regulatory rules based on my industry, whatever industry you’re in, so you can automatically have that as part of your audits, your reporting, and so on and so forth.
Okay. Right. How much time do we have? Eight minutes. Eight minutes? Okay.
So, you want me to take that one? That’s right.
Because I think this came up because of what I said. The question that came in virtually was about third-party attacks: can we compel a third party to disclose its security posture and security audit results?
Well, that depends. Okay? So if you’re suing them and you’re doing discovery, yes, likely.
But if you’re doing just a third-party assessment, no, we probably can’t. But you can decide to do business with them. So I think it depends on the scenario.
Now, here’s what I typically do if I’m doing a third-party assessment and I wanna see, say, the results of a pen test or the last vulnerability test, and they say, we will not show that, or whatever. Then I’m like, fine, we’ll get on a Teams call, you pull it up, and I’m going to scroll through the results and see firsthand that you actually know what’s on there. So they’ll typically agree to something like that as a compromise.
But no, you usually can’t compel them, unless you’re gonna pull the work and then they decide they’re gonna show it to you.
Okay.
Alright. Yeah. I wanna briefly reiterate that I have been a regulator, a vendor, and also a purchaser.
You can compel it contractually.
And if you’re a regulated entity, the fact that you are regulated is often your very best friend when writing that contract, because you can find a way to tell them this is an incontrovertible regulatory or audit requirement. They have to do it.
But Terry’s right. You get that one opportunity.
And certainly, in my experience, I’ve found that you can do your best to compel it, but you, or what’s more likely, your CIO, had better have a plan B in case they refuse.
I do vendor due diligence. I’m fortunate that I’m empowered to reject vendors, and I do.
You’ve gotta be ready for your users to accept that. One thing I’m very careful to do as a CISO is to lean forward on finding alternatives when we do have to do that. Otherwise, ultimately, you’re not gonna be able to make your program stick internally.
Okay, Paul. So, hybrid work and multi-cloud adoption are here to stay, and organizations continue embracing these environments.
How are you managing identity, access, and data protection? Well, that’s a great question.
Everything since the pandemic, you know, we’ve all gone through an enormous amount of stress trying to maintain effective access controls and data protection in rapidly evolving environments. I can’t get into too many specific details, but I can certainly talk about the principles that govern that.
We, first and foremost, have found that being diligent about the basic tools that are available, particularly if you’re in a cloud-based environment, is actually pretty effective if you know how to use them.
And so what we focus on again, I keep coming back to this.
There’s no substitute for blocking and tackling.
Somebody, I forget who it was, mentioned the importance of understanding and reviewing access, of doing that due diligence so that everybody who has access to anything in your systems has appropriate access. We work fairly carefully to try to segment information based on true need to know and to segment our information spaces to the best extent that we can, so that anybody who compromises any single account is unlikely to get access to everything, which I know just made some of you probably sweat a little bit as I said that. I’m sure you all have fully segmented networks and server bases. Yeah, okay.
We make use of federated identity because we have the challenge in our agency of having to provide access to certain things to the people of Wisconsin.
And so, again, fortunately, the major providers of federated identity and the major identity service providers actually have fairly robust tool sets. You all know who they are.
But you need to take the time to make sure that you understand very thoroughly how those work. We also make use of multifactor authentication (MFA) to the greatest extent possible in the strongest possible implementations for the simple reason that that’s as close as you get to adversary repellent.
If there’s a silver bullet in our industry, that is it for most threats.
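For readers curious what one common MFA factor actually is under the hood, the time-based one-time password (TOTP) derivation of RFC 6238 fits in a few lines of standard-library Python. This is an editorial illustration, assuming nothing about the department’s actual identity stack; real deployments should prefer phishing-resistant factors (e.g., FIDO2) over shared-secret codes like this one.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, digits=6, step=30):
    """Derive an RFC 6238 time-based one-time code (HMAC-SHA1) from a base32 secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)  # 30-second windows
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: the ASCII secret "12345678901234567890" at T=59
# yields "94287082" with 8 digits.
```

The shared secret is exactly why this flavor of MFA can still be phished: both sides hold the same key, so a code typed into a fake page works on the real one.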
So we’re not without challenges.
For your application space, it’s one thing to say, yeah, we’re going to implement very thorough, well-segmented cloud-based controls with MFA. It’s another thing to make sure that works for all of your applications, and that can be expensive.
A related issue arises if you are providing access to folks that you don’t necessarily know.
If all you have to secure is infrastructure and staff, then you’re a bit ahead of the game. For us, proving that some outsiders to our system are who they say they are is a key challenge, as are nonhuman accounts.
Understanding where service accounts are being used and who actually holds those service accounts. Those just make me crazy, personally, as a CISO. I don’t want to have them at all.
But we don’t always get to decide what we want.
So it’s a challenge.
I will say that newfangled stuff like cloud, actually, we’re excited about, because it has given us best-of-breed tools in some cases, provided again that we understand how they work.
We haven’t seen, in my opinion, a big negative impact yet with respect to artificial intelligence-driven threats, but I don’t think that will stay the case for very long.
So, we’re certainly thinking about options in that regard. That’s a long answer.
Yeah. So we are pretty much on time, and everybody here could speak for hours. But let’s do the lightning round. Three to five years out: what emerging threats or paradigm shifts do you think will reshape cybersecurity, and how should leaders prepare for that?
Okay.
This is a lightning round. So that means what, ten seconds? Alright. So I think quantum is still a ways out.
You know, it was just thirty years, then it was twenty years, and then it was, like, fifteen years. So financial services, for sure, are starting to prepare with those quantum-resistant algorithms now. I think that’s something that will be put in place over the next three to five years.
We’re gonna see AI battle it out. AI can be used by the good guys, and it can also be used by the bad guys; they’re certainly using it. What is really scary is that it means you can be a really low-skilled person with bad intentions and use these tools to create a lot of havoc. So, as was mentioned earlier, the script kiddies are now loaded up with even more powerful tools. To me, those are the two things that concern me going forward that we need to be prepared for in our governance and risk program.
Data protection and privacy, just a continuation.
You know, no matter where the data is, whether it’s accessed by AI, it’s important for organizations to inventory where their data is and look at the controls around it. Certainly, there’s the CIA triad of confidentiality, integrity, and availability, all those factors. But just know where your data is. To me, it’s getting back to the basics, and it’s all about the data.
The federal government’s role in supporting key elements of our national cybersecurity infrastructure is diminishing. That’s a statement of fact.
At the same time, our adversaries, our nation-state adversaries and major criminal adversaries, continue to improve their techniques and tooling. And this, in my mind, is a gap that always exists but is currently increasing.
I don’t know how to say this without sounding like an alarmist, but we’re all targets on this stage to some degree. Everyone is.
On national security: ninety-five percent of America’s critical infrastructure is, in fact, in private hands.
And it behooves us all to think about what will happen if critical infrastructure services are disrupted and to plan accordingly for that sort of scenario.
And I would just add, from the automation standpoint, it’s having systems with AI that have the ability to make more intelligent decisions, where it’s not just about efficiency but actually making decisions with generative AI and agentic AI. Obviously, that’s a big one. The second piece I would say would be around the geopolitical.
You know, we are becoming more and more of a global, interactive, and mutually impactful world.
So I believe systems will go toward solving a lot of these problems, where the human touch or human intelligence within technology, where it’s more empathy-related, becomes more humanized. I think that’s where technology is heading, and specifically in ways relevant to the diverse population that the world holds.
So I think that’s where technology is going.
Okay, thank you for your insights, and please join me in thanking our panelists: Terry, Mark, Paul, and Cicero.
And thank you for sharing your experience.
Thank you, audience.
Yeah. Thank you. Thank you.
