Transcript

Patrick Spencer (00:02.51)
Hey everyone, welcome back to another Kitecast episode. I’m your host for today’s show, Patrick Spencer. We’re very excited to have our guest with us, Aaron McCray from CDW. Aaron, thanks for joining me today.

Aaron W McCray (00:16.456)
My pleasure, looking forward to it. Thank you for having me.

Patrick Spencer (00:19.298)
Well, Aaron, just to give our audience a little context: Aaron presented at our SKO event last month and gave a fabulous presentation where he talked about leadership and engagement, owning the things that you have on your plate, and career development. So for today’s podcast, rather than talking about technology, which I’m sure we’ll dive a little bit into during today’s conversation, I thought we would talk about

some of the concepts that flowed out of his presentation. Before we do so, I want to give our audience a quick introduction of who Aaron is. He’s the field chief information security officer at CDW. As a CISO, he advises organizations on how to secure data, meet compliance mandates, and at the same time still move the business forward. It’s the business that counts at the end of the day, but making sure you have the right compliance and security frameworks in place is

critically important, as our audience knows. He’s a retired U.S. Navy commander with 27-plus years of experience in information warfare and intelligence. He specializes in risk-based security programs, where he turns complex requirements into practical execution that reduces exposure and improves resilience. Anyone who’s spoken to him probably understands this: he can take the complex technology stuff and reduce it into something that actually impacts the business.

And there are not many folks in the marketplace who can do that today. Aaron is a regulatory and compliance subject matter expert. He’s recognized as a voice on next-generation and AI governance; we may talk a little bit about that. And if you look at his LinkedIn profile, he holds a bunch of different certifications. From an education standpoint, he’s a doctoral candidate in business strategic leadership at Liberty University.

We’ll have to have him back when he finishes that degree to talk about his dissertation, I suspect. He has a master’s in management technology from Oakland City University and a bachelor’s degree in operational management from Wilberforce University. And I could go on and on; he has a very, very impressive background. Aaron, thanks for doing this with me. We’re looking forward to the conversation.

Aaron W McCray (02:32.149)
My pleasure, and honestly, I think as you were going through my curriculum vitae, all the years and dedication and hard work and commitment to get to this point were flashing before my eyes. But it’s a passion, especially when we were talking about the SKO event, right? Leadership to me, developing

that kind of resilient mindset for today’s leaders is more critical than any other time that we can talk about, especially when we start talking about things like AI. So glad to be here and hopefully your listeners will get some benefit out of the conversation today.

Patrick Spencer (03:06.286)
I guarantee they will, for sure. You know, as I mentioned, and you just mentioned, you talked a lot about leadership from a CISO perspective. What does that look like today? Because the landscape looks a lot different, due to AI but other things as well, versus what it looked like, say, three years ago.

Aaron W McCray (03:27.662)
That is a fantastic way to open this podcast, right? If you take a look at where we’re at today, you can’t do so without looking back just a few years ago and understanding that the evolution of the CISO’s role has been dramatic.

We came out of post-COVID, where we had economic constraints and concerns, and CISOs were really more about keeping the business running. So they had mandates that, whether they came from the C-suite or the board, were very similar:

let’s consolidate what you have, right? Let’s be more effective and efficient with what you have, justify any new kind of spend, right? So it was a tightening of the purse strings, if you will. And during that timeframe, CISOs really had to start focusing on the tactical execution of their roadmaps, looking at their budget requests, scaling the perimeter defense, because what ended up happening post-COVID was our workforce became distributed.

We’re working from home. We have multiple different touch points for our systems, our data, our networks. So CISOs were really kind of thrust into this role that…

probably by design they didn’t want to do but had to do. They were focused more on the metrics of, you know, what did we prevent? What did we block? And they became more of a kind of security gateway, if you will, that says, hey, we’re the office of no, and I say that tongue-in-cheek. But success was often defined by how we addressed risky projects, risky ventures, and made sure that from a business perspective it was aligned to, you know, what our risk posture was. Again, not ideal

Aaron W McCray (05:10.768)
for maturing your program or your posture, and even for the CISO, very minimal skill sets were required for that.

But that’s what the economy, the environment, the conditions, the market conditions drove us toward. Much, much different now, right? We’re well past that. 2025 was a great year of transition. We saw a new administration with new economic policies come in. We saw a lot of support for businesses and for economic development and growth, and a lot of hiring that had previously been frozen; we started seeing those markets open up.

We started seeing a lot more, I’ll just put it this way, economic spending toward the security posture of organizations, but with a different kind of focus. You’re starting to see the CISO truly evolve into what they need to be, which is kind of that enterprise executive, right? So as the economy improved,

the focus turned toward things like business resiliency and recovery, not just from a cybersecurity perspective, but also from an operational perspective. CISOs truly became business leaders, and that’s a significant shift for the role of the CISO. Basically, it was solving the problems together, or collectively: how quickly can we get back up,

what was impacted, can we maintain kind of that minimum viable business operational posture, because at the end of the day we need to protect revenue streams. So business outcomes and financial risk quantification became things that CISOs naturally embraced and started to learn, develop, and grow within their organization.

Aaron W McCray (06:54.636)
I call it the architecture of trust, if you will, right? So success, as we move forward, would be defined by a much more mature focus on business outcomes, strategic growth and business enablement, and allowing the business, especially in the age of AI, to move swiftly but safely. And it’s not an easy thing to do. You’re changing your focus from being more of a kind of tool operator

and a perimeter-defense type of person, think technology-focused, to strategic risk executive. And your reporting might change too. So previously you might be aligned under a CIO or a CTO as a CISO. But as you make that shift and they start to see the value of a strategic risk executive, you might report to a committee on the C-suite or to the CEO directly. So those are kind of the big shifts or changes I’ve seen over the last

few years.

Patrick Spencer (07:53.91)
You bring up an interesting point: the KPIs for a CISO probably change a little bit in terms of how they’re measured as the reporting structure changes and everything else. You know, what do you see happening there? And if you’re a CISO out there listening to this podcast, what are a couple of recommendations you might have in terms of how to make that pivot?

Aaron W McCray (08:16.554)
So one of the things that I referenced is looking at the landscape that a CISO has to address with a more holistic lens and approach towards kind of enterprise risk, governance, compliance management. So think of it as a means of automating what we would normally do.

Let’s say, for example, I might do a spot assessment, right? A quarterly kind of risk assessment. It might be over a portion of the business, or it might be over, say, Sarbanes-Oxley controls, or you’ve got some other framework, PCI for example. Well, that’s fine. It gives you that point-in-time kind of snapshot.

But how do you explain where we’re at on a continuous basis, and whether that risk is actually relevant to the business, right? So it’s kind of the movement of the metrics that drove things like risk management frameworks and cybersecurity maturity model indexes, CMMI. Those still fall back to the same old traditional approach of, well, hey, Aaron, give us your best opinion, based on your experience, your expertise, your knowledge of what we have in place,

utilizing a risk management framework: is the control good, and is the control effective, or mature in this case? That’s fine. That’s not a bad approach, and I’m not advocating that you shouldn’t start there, but we need to mature beyond that. What I’m suggesting is: if you sat in front of your CFO and they asked you the question, Aaron, why should I invest in this?

What do you have from real-world metrics that points to: this is the reason, this is the top priority for our organization versus these other four or five things that are equally risky, right? We didn’t have that kind of foundational information to present back to a CFO, CEO, or COO and speak their language from a business perspective, right? So, hey, Aaron, this is our minimum viable business.

Aaron W McCray (10:22.35)
What are these risks? How do they impact a minimum viable business? How do they impact our bottom line? You know, the profitability of the company, the EBITDA, if you will, right? How do they contribute toward that? Those are all kinds of business acumen and terms that CISOs typically didn’t engage with in the past, right? They focused on their swim lane: this is cybersecurity, this is the risk around cybersecurity, here are the controls you need to have. We’ve moved beyond that. Let me give you a perfect example.

One of the things that we like to encourage…

There’s more than just me as a field CISO at CDW; there are four others, and we’re all kind of cut from the same cloth, all focused on the same things together. One of our focuses is risk financial quantification. That’s understanding the business acumen and translating the cybersecurity program, your methodology, your approaches, your roadmaps, your budget requirements,

back into those terms that truly assess the financial risk to an organization. So how do you do that? That’s really the big question. One of the ways that we’re helping customers right now is taking their focus from just performing a risk assessment, let’s say based off of the CSF, the latest version, and also linking that up,

using, you know, whether it’s AI or the mathematical algorithms that sit behind a lot of this, to look at real-world claims data. So we’ve partnered, for example, with a global insurance provider who’s been addressing cybersecurity insurance claims for going on 20 years. If anyone has a wealth of data that you can parse, think about that as a large language model that you can just turn your research loose on, right? What was the root cause?

Aaron W McCray (12:17.686)
What were the leading indicators for that particular risk? What industry was it in? What types of systems were they using? What were some of the processes behind it? So you get to the root cause analysis, the forensics, if you will, that says: this is what caused this, here’s what the impact was, and as a result, here’s what the financial impact was.
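The kind of claims-data mining described here can be sketched roughly as an aggregation over historical claim records. Everything below is hypothetical: the field names and dollar figures are illustrative stand-ins, not real insurer data.

```python
from collections import defaultdict

# Hypothetical claim records; real claims data would carry many more fields
# (industry, systems in use, leading indicators, forensic findings).
claims = [
    {"root_cause": "phishing",    "loss": 2_400_000},
    {"root_cause": "phishing",    "loss": 1_100_000},
    {"root_cause": "missing_mfa", "loss": 5_000_000},
]

def loss_by_root_cause(records):
    """Aggregate claim count and total financial impact per root cause."""
    totals = defaultdict(lambda: {"count": 0, "total_loss": 0})
    for rec in records:
        bucket = totals[rec["root_cause"]]
        bucket["count"] += 1
        bucket["total_loss"] += rec["loss"]
    return dict(totals)

for cause, stats in loss_by_root_cause(claims).items():
    print(cause, stats)
```

Ranking that output by total loss is one simple way to arrive at the “top five” list presented back to the C-suite.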

Isn’t that the beauty of what AI should be doing, especially if we’re talking about agentic AI? It looks at large data sets, does that historical trending and analysis, and provides the output you need to make intelligent, informed decisions. So by combining all of this together, you have a much better posture to go back to the C-suite and say: here are my top five.

And here are the risk indicators of why these are the top five. Here’s the impact to the business. Here’s the external financial risk exposure if one of these events were triggered. Now, let me take another step forward. With a minimal investment, let’s say we don’t have multi-factor authentication completely configured correctly and rolled out across our entire enterprise, right? We’re in multi-cloud, we’re hosting data centers, what have you. Doesn’t matter.

We’re going to go ahead and complete that exercise, and for a $350,000 investment, I’m going to turn around and reduce my external threat and financial risk exposure by $35 million. Oh, okay. That’s the ratio that I can sit down with the CFO and walk him through. And he goes, yeah, I want that, right? I get that return on my dollar. Now I know what I’m spending my money toward. I’m avoiding this particular incident.
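The ratio walked through with the CFO reduces to simple arithmetic. A minimal sketch, using only the figures from the example above (the numbers are the speaker’s illustration, not real customer data):

```python
def risk_reduction_per_dollar(investment: float,
                              exposure_before: float,
                              exposure_after: float) -> float:
    """Dollars of financial risk exposure reduced per dollar invested."""
    return (exposure_before - exposure_after) / investment

# The example: a $350,000 MFA rollout that removes $35M of external
# threat and financial risk exposure.
ratio = risk_reduction_per_dollar(350_000, 35_000_000, 0)
print(f"${ratio:,.0f} of exposure reduced per $1 invested")  # → $100 of exposure reduced per $1 invested
```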

This has become so effective that a few of our customers have even gotten to a point where they can self-insure. Meaning, traditionally we’ve transferred some of our risk to a cybersecurity insurance policy, but we’re not guaranteed they’re going to pay out on these things, because a lot of times there are those little clauses underneath: you must have X, Y, and Z, and it must be demonstrated they were in place at the time of whatever event, and if you can’t do that, they don’t pay.

Aaron W McCray (14:26.808)
But what if you can get to a point where you can say: we have residual external financial risk of $30 million, so let’s just self-insure. Let’s just grab a policy for that, right? No riders, no claims, none of that other nonsense on there, but we’re covered. The kicker is, you’ve got to do that continuously. And that’s where the CISO 3.0 comes in. That’s the evolution from doing risk management framework and maturity model attestations at a point in time to

building an environment that’s continuously assessing risk and exposure. And again, this is leveraging a lot of what we’re seeing in place today, whether it’s agentic AI,

machine learning, or any of the other types of automation, orchestration, and response capabilities. This is where we’re seeing CISOs going. It’s not an easy thing to do, and I’m sure we can talk about that, but I want to share a couple of things. This will change the scorecard a little bit for CISOs, right? As they’re looking at this, they’re going to naturally start to assess their environment based on those operational and cybersecurity resiliency factors we talked about,

Patrick Spencer (15:26.627)
Yeah.

Aaron W McCray (15:38.914)
right, that I referenced earlier.

So the benefits that you get from some of these advanced capabilities and technologies will help to drive, you know, what does our governance look like in a real-time posture. Now, this isn’t the committee that meets on a monthly basis and produces meeting minutes. This is more along the lines of: my designed controls are being measured on a daily basis, they’re effective, and I’m governing any of the anomalies and outages and addressing them in near real time to bring us back up to whatever that state or posture

is. Now, when I’m representing that metric back to the C-suite or the board, it’s: you’ve invested X number of dollars, we maintained a 99.875 percent consistency to our posture, right, and the deltas were closed within X number of days. Those are pretty doggone impressive statistics when you can say this has been measured continuously since the beginning of the year. What, we’re 11 days into February?

And if I was able to give you what our company’s posture was today, and it hasn’t changed since January 1st, would that matter? I think it would.
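The continuous-measurement metrics sketched above (posture consistency plus time-to-close on deltas) could be computed along these lines; the daily check results and delta counts are made up purely for illustration.

```python
def posture_consistency(check_results: list) -> float:
    """Percentage of control checks that passed over the measurement window."""
    return 100 * sum(check_results) / len(check_results)

def mean_days_to_close(delta_close_times: list) -> float:
    """Average days taken to close posture deltas back to baseline."""
    return sum(delta_close_times) / len(delta_close_times)

# Illustrative data: 800 control checks since January 1, one failure,
# and three deltas closed in 1-3 days each.
print(f"{posture_consistency([True] * 799 + [False]):.3f}% consistency")
print(f"deltas closed in {mean_days_to_close([1, 2, 3]):.1f} days on average")
```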

Patrick Spencer (16:46.752)
Yeah, absolutely. You spun up a couple different questions for me. You know, one: when you look at the way you’re doing this assessment, which is dynamic and uses the large language model pool of data from the insurance companies, are you finding that the risk per industry, and so forth, even per region for that matter, aligns with what Ponemon and IBM report annually, or does your data look different?

Aaron W McCray (17:17.132)
I’m not sure that we’ve gone into that kind of depth of analysis to do what I would call a comparative analysis against other industry-recognized research, whether it’s IBM or Ponemon or, there’s a lot out there, ISS and a few others. We’re more focused on the business vertical, right? So how does a CISO present back?

For example, I’m in the manufacturing industry. How do I present back: here’s where we’re at in reducing our exposure as a target and returning that investment back to the business to do things like automation, utilizing AI. Maybe we’ve got a business-to-consumer model and we want to be able to create agents that actually interact in a much more

agile or quick fashion with our customers, at a depth and breadth that previously, you know, we couldn’t achieve as humans analyzing buying patterns and trends and everything else, right? So it’s: where do we compare in our industry against others? Why? Because that’s a competitive advantage, and I can start to market that. And when we start to see consumers recognize that we’re leading in those spaces, and I don’t mean my company, but the companies we help,

my goodness, that’s a great conversation to have. So whether the risk that that company is now facing, compared to what others who aren’t doing what we’re suggesting face, I’m not sure that really matters anymore, right? Because we’ve created a delta between their posture and what the next-generation type of security posture looks like.

Patrick Spencer (19:03.246)
Interesting. Now, IBM and Ponemon, when they do their annual report, and you mentioned there are others that do something similar, maybe not as in-depth, it’s annual. How frequently does one need to assess their risk posture? And are you seeing variance over a couple of months and so forth? It probably depends on what they have deployed, what new threats exist in the marketplace, what new AI initiatives are being embraced,

even if it’s shadow AI, or maybe it’s a corporate AI initiative. I mean, there’s a lot of different dynamics that go into it.

Aaron W McCray (19:37.314)
There are a tremendous amount of dynamics, and we could probably speak for days and not even begin to scratch much more than 10% of the surface of what those dynamics are. AI is a game changer. It absolutely is. If I look back over my lifetime, right, and I’m 58 years old, so it’s not like I’ve been around for a super long time, but

I do remember when the PC was first rolled out. I understand the impact that had, and then cellular technology, the internet. I mean, we’ve seen in my lifetime, in the last 40 years, some incredible technological transformations that have transformed not only our personal lives, but the way businesses do business today, especially on a global scale.

But I can’t tell you that I’ve seen anything that resembles what we’re seeing with AI. And I don’t mean that to sound like hyperbole, I really don’t. But the rate at which AI is evolving, and the capabilities that it brings to businesses, as well as the risks,

it’s unheard of. It’s unprecedented. I don’t know how organizations cannot be so focused on that right now.

What is the risk to our organization or business? What are the benefits to our organization and business? And what are we doing about it? Do we have a strategy in place, right? Do we have an approach and methodology? Do we have policies? Do we have governance in place? I mean, frameworks, standards, right? This is not like your traditional cybersecurity hygiene, where I can just say, implement these five things and we’re good to go. You hit on something that is absolutely

Aaron W McCray (21:26.254)
relevant right now, which is shadow AI. Think about what’s happened in the last week, if you’re familiar with Cloudbot. That has been in the headlines, and the team that I work for has been heavily investigating and researching it, even to the point that we’ve had a couple of my peers get what we would call burner machines

to actually download it and play with it to understand its reach, right? I mean, what you’re basically talking about is an autonomous AI personal assistant that has root-level access to everything that you have access to. I mean, we’re talking accounts, passwords, access to other SaaS systems and applications that you might have access to, banking applications, your tokens, and all

Patrick Spencer (21:56.385)
Interesting.

Aaron W McCray (22:21.872)
sorts of different things. And then it kind of coalesces all of that and stores it, are you ready for this, in plain text, right on your local system, so that it can actually go and do all the things that you tell it to do. And then it evolves. It thinks beyond that. It interacts with other agents, and it starts to go, hey, we can do more. But nobody’s sitting there thinking: ethically, should we do more? Is it the right thing to do more? Where do we put the guardrails in? Well, there are none with Cloudbot.

Take that to shadow AI. You’re an organization that hasn’t effectively put your policies in place on what is allowed and what isn’t allowed, and that can’t enforce those, whether it’s at the browser level or elsewhere, right? So if your organization is allowing me to get to any type of browser and then go out and get to Gemini and all these other different types of AI agents, you’ve just opened up a back door,

essentially, to exfiltration of your data. All those closely guarded secrets, your secret sauce, if you will, are now exposed. Not to mention, what threats do they bring back in, right? What injections are coming from those agents back into your networks and systems? So you need to be focused on: how do we identify it? How do we lock it down? How do we only

allow or attribute certain types of AI agents? Say, for example, you’re a Microsoft shop, so you’re in Copilot, and we put guardrails around that, and that is the only thing we’re allowed to use on corporate systems. That’s where you need to be.
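The browser-level enforcement described above amounts to an allowlist check at a proxy or secure gateway. This is a hypothetical sketch; the hostnames are illustrative examples, not a complete catalog of AI endpoints, and real enforcement would live in a gateway product rather than a script.

```python
from urllib.parse import urlparse

# The one AI tool policy sanctions (a Microsoft shop standardizing on Copilot).
SANCTIONED_AI_HOSTS = {"copilot.microsoft.com"}
# Known AI endpoints the gateway recognizes; anything here but unsanctioned
# is shadow AI and gets blocked.
KNOWN_AI_HOSTS = {"copilot.microsoft.com", "gemini.google.com", "chat.openai.com"}

def allow_request(url: str) -> bool:
    """Pass sanctioned AI and non-AI traffic; block shadow-AI endpoints."""
    host = urlparse(url).hostname or ""
    if host in KNOWN_AI_HOSTS:
        return host in SANCTIONED_AI_HOSTS
    return True  # non-AI traffic falls through to other controls

print(allow_request("https://copilot.microsoft.com/chat"))  # True
print(allow_request("https://gemini.google.com/app"))       # False
```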

These are the things that I’m sure CISOs are staying awake at night over: do I have budget? Do I have the right teams in place? If I have the right teams, do they have the right training? And if they have the right training, do they have the right tools? And if not, how do I present this business case and get those resources so that we can correctly and quickly go back to the business and say: look, there are three things you can do from a cybersecurity perspective with AI. We can protect what you’re already developing.

Aaron W McCray (24:31.958)
We can enhance and enable security features with AI, with our partners or what we develop. And then thirdly, we use cybersecurity to prevent AI attacks, essentially. And if we’re not focused on those three things as a CISO, we’re already behind the ball.

Patrick Spencer (24:48.706)
Hmm. Great suggestions. You know, this concept of data security posture management is a very popular conversation, I think, in the arena of cybersecurity. How do you tag your data, find your data, determine what’s confidential, what’s not confidential? And then, with the partnership we have with you, as well as with some of those DSPM vendors in the marketplace, Kiteworks helps you do the enforcement.

Aaron W McCray (25:08.888)
Sure.

Patrick Spencer (25:18.838)
Is that something that organizations are looking at? I try to control my shadow AI, and I even have corporate deployments of AI, but how do I ensure that there isn’t data being leaked into those public AI LLMs, or even the ones we have deployed, or a scenario like the one you described? Is that something that’s coming up

more and more often with your customers in conversation, you know, that strategy around data protection end to end?

Aaron W McCray (25:50.324)
Absolutely. So, you know, I referenced earlier that CISOs shifted their focus to kind of

operational resiliency, right? Not just cybersecurity resiliency. And if you think about data, data is probably the most critical asset that an organization has from a business perspective. It allows for forecasting, historical trending and analysis, for building new product lines and new go-to-market strategies, and for enabling business growth, right? It’s what developers might use when they’re developing new applications and means of reaching consumers

or their customer base. So yeah, data is critical, but it’s not a cybersecurity problem, at least not alone, not by itself. It absolutely is a business issue. And so when we start thinking about the operational perspective, data security posture management translates across all of your different environments. So if you’re thinking about, one, do I have a data inventory? Do I even know where my data lives?

Even if I did know where it lives, could I do anything about it? And do I know if it’s the actual trusted source of that data or a secondary, tertiary, quaternary extraction of the source data?

And did it violate a policy? I mean, these are all questions the business needs to be asking, and that starts with the governance piece first, right? So when you think about, do I have governance from an AI perspective, that should be walking hand in hand with, do I have governance from a data perspective? Because the data is going to feed the AI solutioning and models and your training and whatever it is you’re creating. So that governance question is basically the human element that needs to remain

Aaron W McCray (27:39.054)
in the loop to ensure that the organization is doing what is contractually obligated, what is legally mandated, and what regulatory requirements they also have to align to. And then, not to put these at the end as if they’re less important, because to me, in this day and age,

they’re probably just as critical, if not more so: is it ethical to do? And is it moral to do? Just because we can do it doesn’t mean we should do it, and we need to take all of that into consideration. So when we do that, we start to put things in place like: what is my classification policy, right?

What is my data use and handling policy? What does my data retention policy look like? And then data destruction. You get the idea. So I’ve got the governance pieces in place. Now I can effectively do data security posture management

by using one of the platform sets out there. There’s no doubt these powerful tools are going to be able to gain access to these data fabrics, data lakes, data repositories, and databases, whether in the cloud or on-prem. They’re going to be able to see the data, and based upon their own repositories of vast information, be able to classify that data correctly.

But classify it now toward what your policy demands, right? And also look at it from a perspective beyond just data loss prevention, asking: what are the risks to that data where it exists right now? Who’s accessing the data?

Aaron W McCray (29:12.372)
Now I start asking the questions: why are they accessing the data? Should they access the data? Does that align to my policies, because it’s part of their job, it’s need-to-know, right? You might discover I have a very secure data warehouse with all the different elements I need to store certain types of data. But when that data comes out for business analysts and other types of business processing, those controls don’t go with it.

And now I’ve exposed the risk outside of my structured data to my unstructured data. These are things that DSPM will help you understand. It’ll look at behavior analysis and environment analysis for that system: is this anomalous activity surrounding this data? This is where we’re going. But all of that has to feed back into the governance that says: this is what is normal, this is what our baseline should look like.
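The baseline check just described can be sketched as a policy comparison: flag any access event where the actor’s role isn’t cleared for the data’s classification. The roles, labels, and events here are hypothetical illustrations, not any vendor’s actual DSPM logic.

```python
# Hypothetical governance baseline: which classifications each role may touch.
ROLE_CLEARANCE = {
    "business_analyst": {"public", "internal"},
    "dba": {"public", "internal", "confidential"},
}

def flag_anomalies(access_events):
    """Return access events that fall outside the governance baseline."""
    return [e for e in access_events
            if e["classification"] not in ROLE_CLEARANCE.get(e["role"], set())]

events = [
    {"role": "business_analyst", "classification": "internal"},
    {"role": "business_analyst", "classification": "confidential"},  # outside baseline
]
flagged = flag_anomalies(events)
print(len(flagged), "event(s) outside the governance baseline")  # → 1 event(s) outside the governance baseline
```

A real DSPM platform would add behavioral and environmental signals on top of this static policy check, but the feedback loop into governance is the same idea.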

These are the things that we’re hearing customers ask us: how do we do this? How do we get there? We’re somewhere on this data journey, now AI is accelerating that journey, and we need some assistance to figure out how to make these two streams, or swim lanes, come together and be more effective and efficient across all of it. That’s where CISOs now become business leaders. Because again, this is a business problem. We’re driving the business forward.

We’re doing so. Think of it this way: I could go buy a Ferrari, or I could pay extra money on that Ferrari to have all of the enhanced safety packages on that vehicle.

Which one do you want to have trust in when you’re going 150 miles an hour down a two-lane road, right? I want the one with the enhanced safety features, to know that I can move at speed but do so safely. That’s the role of the CISO right now: to enable these types of conversations to take place, to understand how do I transform my security program to become like that enhanced safety feature that allows the business to move faster and do so safely.

Patrick Spencer (31:18.127)
Now, I know this, and our audience, if they know you, realizes this as well from your presentation: you’re a philosophical guy. And you actually brought this up already. I was about to ask you this question and you alluded to it: ethics. When it comes to the deployment of AI, and particularly the use of AI and data, right? Because you’re talking about sometimes private health information, you’re talking about PII data,

Aaron W McCray (31:25.868)
Yeah, yeah, I am.

Patrick Spencer (31:46.382)
you know, employee records from a company standpoint, you’re talking about financial records, M&A plans and activities. Not much is said about this in the marketplace today, unfortunately, but there needs to be a lot more, in my opinion: the ethics of how you use AI, and particularly how it relates to data. What’s your perspective there?

Aaron W McCray (32:09.13)
For me, it always comes down to making sure that we have the right human in the loop, or humans in the loop.

Look, AI is still, at its core, right, engineering. It’s programming, it’s development of software capabilities, tools, and functions. It’s not an actual brain. Now, it mimics one, right? But it doesn’t necessarily, unless it’s been trained to do so, stop and ask those ethical, moral questions. It may not even,

you know, may not even have been trained to look at the legal obligations that organizations have. This is where the human has to come in and ask: ethically, what types of data

can we use, and also, should we use them? Again, I mentioned it earlier: just because we can doesn’t mean we should. So I might live in a state that doesn’t have a state privacy mandate in place, unlike California, right, which has very strict requirements on what types of data you can use, how it should be used, how you need to notify the consumer of how that data is going to be used, and give them an option to opt out, right, and then get rid of that data, cleanse it.

Just because that’s not in your state doesn’t mean you shouldn’t look at that and go: we need to adopt something very similar internally as an organization. Why? Because these are the guardrails, those ethical guardrails, we should have in place that will help guide us in our decision making. And furthermore, when we start to put our policy out there for our end users, these are the same guardrails we want them to be using, right, and to be thinking about. So that’s kind of where I start.

Aaron W McCray (33:52.512)
Look, there’s not a universal standard out there yet that says, you know, here are the five guiding principles of ethical AI use and data management. It’s not there.

But that doesn’t mean we can’t have intelligent conversations based on what we know we should be doing. Let’s build it out for our own organization. We actually have delivery practices around just that, where we go in and meet with business executives, because that’s where it starts and that’s where it stops, right? Look, and I’m not pointing fingers, but if you push this decision down to the lowest level possible,

you can expect that some serious consequences are going to arise, because they might be focused on enablement, profit, revenue, customer satisfaction, and the ethical piece might be sitting on the back end. I’ll give you an example, and I don’t want to call out any major organizations. This happened

years ago, and I’m talking four or five, while AI was still really getting its wheels under it. One organization was going to use AI agents to parse through resumes for hiring. So they did all of their training, used their data sets, and what they discovered, because this is a global organization getting tens of thousands of resumes over a period of time, is that they were effectively

introducing bias into the process and discriminating against qualified female candidates. A human in the loop looking at this, doing the proper testing and evaluation, would have caught that, as opposed to, you know, hey, it doesn’t…

Patrick Spencer (35:25.166)
Hmm.

Patrick Spencer (35:31.416)
Yeah. AI doesn’t care. It doesn’t have an ethical guardrail.

Aaron W McCray (35:36.46)
Right? This is what you told me to go look for. These were the training requirements. We’re humans; we naturally have biases we may not even recognize in ourselves. Hence why humans in the loop, a committee, a governance body, could catch that. Another one that had deadly consequences was an organization that was working on self-driving cars,

robotic drivers, essentially, utilizing the capabilities of AI. And so even with all the training and even with all the safety equipment, again, without asking the question, should we be doing this?

Can we really assure safety at 100% at this point? Maybe this is something that should stay in development until we have assurances. What are our legal obligations if something goes wrong? And assuredly, something did go wrong. In fact, the car was not able to safely distinguish between the curb, the sidewalk, and the road, went up, and actually struck and killed a pedestrian.

Patrick Spencer (36:36.214)
Oh, yeah.

Aaron W McCray (36:40.3)
These are the kinds of ethical things that are at the extreme end, I get it. But it still drives home the point that if you’re not putting that framework together before you move forward, it’s like driving that Ferrari without all of those enhanced safety features that will protect you. It’s, okay, drive fast, see what happens. Take chances. Or as my mom used to say, all right, drive fast, take chances,

and we won’t see you tonight for dinner. That was her way of saying, use your brain, be smart, think about your decisions, think about the consequences. That’s the ethics of your decision making. So there’s my philosophy for you right there.

Patrick Spencer (37:23.768)
My mother always said, well, you’ll be riding the school bus for the next two months. That was enough of an incentive to drive safely. I have one or two more questions for you here and then I’ve got to let you go. But, you know, when you look at the AI transformation that’s happening, you compare it to, well, you and I have lived through dot-com. We lived through the financial crisis and the evolution of technology over the last 20,

Aaron W McCray (37:28.11)
With a helmet, with a sticker, with your name tag, possibly.

Aaron W McCray (37:49.134)
Sure.

Patrick Spencer (37:53.518)
25 years. What makes this so much different is these ethical aspects, because there weren’t as many ethical aspects when we were talking about connecting to the internet. Then, you had a computer that was connected to the internet; you could browse, you could send email, you could do other things. But now there’s a whole different range of things that come into play that just didn’t exist 25 years ago with the things we saw happen back then.

Aaron W McCray (38:01.986)
Right.

Aaron W McCray (38:20.974)
You know, that’s a great question. Sure, ethics are a huge part of it, but we can’t escape looking at the technology itself and its autonomy, right? Autonomous agents are terrifying. Look,

I consider myself to be a fairly reasonable, well-educated individual, but by no means do I consider myself to be the Einstein of everything that AI could potentially do. And therefore, when I train it and then turn it on from an autonomy perspective to do certain things, do I feel 100% confident it won’t go off the rails?

No, I just don’t believe that. And I strongly think that we need a much deeper and better understanding, collectively as an industry, of the capabilities. And again, without a defined industry standard and something that, people are going to hate this term, regulates it

to a certain extent, we’re going to have concerns that the technology itself is going to outpace our ability to keep up with it and either use it for good or prevent it from doing destructive harm. That’s really my concern and why this is such a game changer. Look,

we always knew that some of the people getting into dot-com were just doing it out of greed, to make an instant buck. Well, what threat was that to me? Not much, if I didn’t invest in them, right? If I wasn’t buying their services. But this, this impacts everyone. You could be a stay-at-home grandma, not doing much, with a basic phone and nothing else, and you’re still at risk, right? So yeah, that’s where my head is and why I’m so concerned about what it’s doing to the industry today.

Patrick Spencer (39:57.709)
Thank

Patrick Spencer (40:07.502)
Yeah. Very, very valid. Well, you know, we only had about 45 minutes, unfortunately, for this podcast. You and I could go on for a lengthy amount of time talking about all these aspects, but, you know, to sort of wrap things up, I thought I’d pose a question around what CISOs should avoid, what they shouldn’t do, here in the next, say, year. You know, what would be one aspect, what should they not do? To use a double negative.

Aaron W McCray (40:27.854)
Mm.

Patrick Spencer (40:37.642)
And then what should they do to be successful? What advice would you give them?

Aaron W McCray (40:42.574)
That’s another great question, right? So we’re talking about credibility, right? We’ve got to move beyond just being a manager or department-head kind of figure that is looking at, you know,

key risk indicators, key performance indicators, and, you know, things of the same ilk. We knocked out 10,000 vulnerabilities last month. Great. Ultimately, I’m not sure that really drives value back to the business. And if you continue to base your decisions on that kind of fear, uncertainty, and doubt, or what we call the FUD factor, you’re going to lose credibility in the C-suite. You’re not going to be considered one of those business leaders.

We have to shift, we have to change. The technology is forcing us to look at things differently, right? So for example, what’s one thing a CISO can do right now, as they’re evaluating, let’s say, AI technologies, that can help earn trust? It’s when the business comes to them and says, hey, Mr. CISO, Ms. CISO, we want to do this. What do you think? Right?

The answer can’t be that we’re the stopgap and it’s no. It has to be transformative in the sense of, hey, that’s a great idea. Yes, and if we do these things right, we’ve got these guardrails in place. So I’m no longer blocking this innovation, this thing the business is trying to do. What we’re really saying is, let’s put those guardrails on it and let’s move at speed. We’ve got to wrap it so that we don’t put the business at risk, or our consumers at risk, or our systems and our data at risk.

When we deliver the messaging like that, we transform the way that we’re viewed. I harken back to when I was first a CISO, nearly 30 years ago. We were considered, and somebody actually called me this, a black box organization: information goes in, nothing comes out.

Aaron W McCray (42:39.328)
We cannot be like that anymore, right? We have to be the ones leading the conversations, talking about how we can do this, and directing and providing guidance to the business on the appropriate way to do it. When we do that, we’re going to be viewed as a contributor, somebody who is absolutely necessary to the next evolution of the business. I’ll leave you with this.

This is something we focus on as field CISOs at CDW. It’s part of our think tank, it’s part of what we do. And if any CISO listening to this is interested in advancing their career around some of the things we discussed today, and in how to become that CISO 3.0, let me recommend a book by my colleague, Walt Powell.

You can find it on Amazon, and I’ll send a link over to Patrick so he can share it. It’s called The CISO 3.0, a guide to next-generation cybersecurity leadership. It is absolutely critical that we change our thinking about our roles and responsibilities if we’re going to be effective in 2026 and beyond. And we didn’t even talk about post-quantum computing and encryption yet. I’ll leave that for another podcast.

Patrick Spencer (43:46.422)
Yeah.

Patrick Spencer (43:55.758)
Sorry, that’s the dogs barking. Hopefully you’re not picking this up.

Aaron W McCray (44:02.53)
Well, we can hear them. It adds a little bit of humanity back to the conversation, so it’s okay.

Patrick Spencer (44:09.026)
Hopefully she’s just barking at the vacuum. Sorry. Hopefully this is it, or I’ll have to go unplug it. I apologize. Well, that wraps things up. That’s fabulous, Aaron. I’m going to put a link to that book in the summary of the podcast, so anyone who would like to check it out can click on it and go purchase a copy on Amazon. Thanks for that recommendation, Aaron. For organizations who are interested in engaging with you and your team

Aaron W McCray (44:31.886)
Let’s hope.

Patrick Spencer (44:36.312)
to do that strategic analysis, to evaluate what their risk posture looks like, should they reach out to you on LinkedIn? How best to contact you? And, for that matter, how should they reach out to you about the partnership you have with Kiteworks?

Aaron W McCray (44:51.918)
Absolutely. You know, LinkedIn is a good way, or they can just use my first name dot last name at CDW.com and email me directly. And if I’m not… well, here’s the thing. I learned this a long time ago from a mentor who said, check your ego at the door, right? This is a business decision. We’re all in it together. What does that mean? I may not be the right person, but I will find the right person, right? We’re an organization of nearly 15,000 experts, and

you know, I’m the right person for data security posture management and for how we’ve partnered with Kiteworks to help organizations secure their data, their data exchanges, their collaboration with data, and how it impacts AI. But you may be saying, Aaron, I’ve got this problem, and that may not be me, but I’ll get you to the right person. I promise you that.

Patrick Spencer (45:39.298)
That’s fabulous. Aaron, thanks for your time today. Wonderful conversation. Our audience is definitely going to find this podcast engaging, thought-provoking, and helpful in their day-to-day jobs. So hopefully we can have you back in the future.

Aaron W McCray (45:52.802)
I would love that. Thank you for having me today. Hopefully, to your point, they derive some value out of this. So thank you again for having me.

Patrick Spencer (45:59.092)
Absolutely. And for our audience, check out other Kitecast episodes at kiteworks.com/kitecast. We look forward to having you listen to our next podcast.

