AI adoption is forcing a fundamental shift in how data centers are built and operated. GPU-dense facilities, massive power requirements, and automation are pushing organizations toward shared AI infrastructure that looks very different from traditional enterprise or hyperscale environments.
With this shift comes a new security reality.
The most sensitive data organizations possess—intellectual property, models, simulations, and decision logic—is now processed in multi-tenant AI data centers that depend heavily on wireless connectivity for monitoring, orchestration, and control. Understanding the emerging threat landscape is essential before it becomes a problem.
This is the first webinar in a multi-part series examining AI data center security from first principles through operational reality. The series is educational and discussion-focused, designed to help organizations understand emerging risk before it becomes a problem.
What You’ll Learn in this Webinar
- Why AI data centers are becoming strategic and sovereign assets
- How AI workloads change the threat model for data centers
- Where traditional security assumptions break down
- Why wireless visibility matters in automated environments
- How organizations are thinking about risk before incidents occur
- Oracle’s deployment of Bastille to monitor their AI Data Centers
Who Should Attend
This session is designed for professionals responsible for AI infrastructure, data center security, and enterprise technology strategy, including:
- CISOs and cybersecurity leadership at organizations deploying AI infrastructure
- Data center operators and facility security managers
- CTOs and technical leaders evaluating AI deployment strategies
- Physical security and operations teams overseeing critical infrastructure
- Enterprise architects and infrastructure planners
- Risk management and compliance professionals in AI-intensive industries
Speakers
- Justin Fry: CMO, Bastille
- Dr. Brett Walkenhorst: CTO, Bastille
- Paul Calatayud: CEO, Voltscape
Video Highlights & Takeaways
The Rise of the “AI Factory”
Paul and Brett discussed how the physical and architectural demands of AI compute are transforming traditional data centers into specialized “AI Factories.” This shift is driven by the extreme density of GPUs, which generate heat that traditional air cooling cannot manage, necessitating liquid cooling and massive power infrastructure.
- Gigawatt Scale: Projects are reaching massive scales, with Paul noting current involvement in projects up to 7 gigawatts.
- Infrastructure Changes: The move to high-performance computing involves specialized networking (e.g., Mellanox 400/800 Gbps) that often bypasses traditional operating systems and firewalls, creating new security blind spots.
- Dark Data Centers: There is a trend toward fully automated, “lights out” facilities to minimize human presence, paradoxically increasing reliance on wireless communication for robotics and sensing.
Data Sovereignty and High-Stakes Assets
The conversation highlighted that the assets protected in these facilities have shifted from replaceable web content to high-value proprietary models and national secrets. The loss of a trained model or dataset is now comparable to losing a pharmaceutical formula or military secret.
- GPUs as Weapons: Paul described signing ITAR (International Traffic in Arms Regulations) paperwork when procuring high-end GPUs, which he said effectively classifies them as regulated weaponry.
- Nation-State Competition: An “arms race” exists between nations to control compute power for economic stability and information dominance.
- Intellectual Property Value: With training costs for frontier models like GPT-5 estimated at north of $5 billion, model weights and training data are prime targets for theft.
The Wireless (RF) Threat Vector
A central theme was the “dissolution of the perimeter.” As digital network defenses (firewalls, zero trust) become more robust, attackers are increasingly using the Radio Frequency (RF) spectrum as the “path of least resistance” to bridge air gaps and bypass physical security.
- The “Air Gap” Myth: Physical isolation is insufficient because wireless protocols (cellular, Bluetooth, Zigbee) can penetrate walls or be introduced via compromised supply chains (e.g., a rogue chip in a lightbulb).
- Multi-Tenancy Risks: In shared data halls, a neighbor’s insecure wireless device can provide an attack vector, as radio waves do not respect cage boundaries.
- Proximity Myths: Attacks do not always require close physical access; Bluetooth vulnerabilities can be exploited from over a mile away, and cellular backhauls allow remote exfiltration.
Real-World Attack Scenarios
Dr. Brett Walkenhorst shared several anecdotes illustrating how hardware and wireless implants utilize the “invisible” spectrum to compromise secure environments without touching the corporate LAN.
- The Server Rack Hotspot: A rogue cellular hotspot was found in a server rack, connecting to a client for short bursts to exfiltrate data directly to the cloud, bypassing all enterprise firewalls.
- The Casino Fish Tank: A well-known breach where attackers entered a network through an insecure, internet-connected thermostat in a fish tank.
- The ATM Implant: A Raspberry Pi with a 4G modem was discovered hidden in a bank’s ATM network cabinet, providing persistent remote access.
- Nearest Neighbor Attacks: Attackers who are blocked by MFA can compromise a neighboring building’s Wi-Fi or IoT devices (like smart speakers) to bridge into the target network.
Strategic Takeaways and Solutions
The panel concluded that attempting to physically block RF signals (e.g., Faraday cages) is cost-prohibitive and prone to failure over time. The industry standard is shifting toward active RF visibility and analytics.
- Visibility is Key: Operators must actively monitor for unauthorized wireless signals (cellular, Wi-Fi, IoT) to identify rogue devices immediately.
- Shielding Limitations: “Wrapping a data center in foil” is impractical because seals degrade and utilities require penetrations, making RF shielding difficult to maintain.
- Industry Adoption: Major players are recognizing this gap; the webinar announced a global collaboration between Bastille and Oracle to deploy wireless detection across Oracle’s AI cloud infrastructure.
Full Video Transcript
Thank you so much for joining our webinar today. My name is Justin Fry. I’m the CMO here at Bastille. This is the first in a series of webinars on AI data center security.
AI data centers and the new security landscape is the topic of this first webinar in the series. We’re very lucky to be joined by Paul and Brett. Paul is an advisor to Bastille and the CEO of Voltscape, a next-generation data center and energy development consultancy supporting hyperscalers, AI GPU platforms, and sovereign digital infrastructure initiatives.
Paul is a twenty-six-year cybersecurity and infrastructure leader. He began his career in the US Department of Defense, has served as CISO at Palo Alto Networks, and is in fact a six-time Fortune 500 CISO. We’re also joined by Brett. Dr. Brett Walkenhorst is the CTO of Bastille.
He leads all the R&D efforts and works closely with all of our government and commercial customers. Before working with us, he was the Director of the Software Defined Radio Lab at Georgia Tech. So thank you so much for joining. And we’re a couple of minutes into this.
We’ll keep the polls up for a bit. I’m going to go over to you, Brett and Paul.
Thank you so much, Justin, for that kind introduction.
Yeah, thank you very much. We’re looking forward to the discussion.
Paul, I think maybe one of the first things to talk about is what’s new about AI? Why does this matter? What’s changing here? And we can get into some details.
Yeah, I think it’s an interesting situation. Quite honestly, if you would have asked me maybe five years ago whether we’d be on this call having this discussion, I would have said, why? It doesn’t make sense, you know. The data center market already exists.
But I’ve been in the trenches. I’ve built an AI data center from the ground up, and I’m continuing to build data centers around the world.
And so what I’ve observed, and why this matters and what’s happening, is a couple of things. One is, let’s just talk about what changes in a GPU AI world, you know, maybe called the AI factory. And that has to do with the power density, the cooling. Like, it fundamentally creates a large stress on the grid infrastructure, on the traditional facilities.
And as a result, it generally means that things need to change design wise architecturally speaking. And that becomes an opportunity, quite honestly, in terms of, like, how do you change? Do you just make it bigger or do you fundamentally bring in things like direct liquid cooling and just fundamentally change the way you might have done it before? And you have to do that because you have scale challenges that you need to address.
So that’s one thing.
Paul, is that cooling requirement simply being driven by the intensity of the operations that are being done by AI? Is it a greater density of compute power? Like, what’s driving that?
Yeah, both, but it’s definitely the density, you know, of the racks themselves. The amount of heat that these GPUs generate just cannot be transferred with air. Even if you had, you know, a giant air conditioner just blowing as much air into that rack as possible, it wouldn’t be enough.
So you need to bring in water cooling, you need to bring in, you know, different types of thermodynamics. So that’s one component of it. The other thing, and I think this is probably equally important and probably the one that I didn’t understand, because it’s not something you really see, is the sovereignty and sensitivity of what’s occurring in these data centers. So even in organizations that might have a public cloud infrastructure strategy today, and even if that public cloud infrastructure, you know, has the ability to handle the AI workloads, the reality is the sensitivity of that information is changing.
It’s not a website. These are some of the largest datasets and most proprietary information an organization has, because they’re trying to take advantage of millions and billions of dollars of infrastructure they might have just invested in to realize some sort of agentic AI LLM outcome. And so as a result, the classification, the sensitivity, the amount of criticality in terms of the information is also somewhat of a paradigm shift, from a website that could be hosted anywhere because it’s my marketing website, versus it’s my entire proprietary formulaic approach to how I do DNA for therapeutics and different industries.
It’s the most sensitive information.
Yeah. That makes a lot of sense. So it sounds like wireless is a piece of this, right? So as sensitive data is moving into the cloud being accessed by these AI algorithms, how is wireless being deployed? How is it being used in these environments?
Yeah, I mean, I would say that’s kind of the third evolution. And this is one that has to do with the fact that these are critical infrastructures, these are critical assets, and security is not being ignored. Right? And so as a result, I’m observing very sizable investments in cybersecurity around the protection, sovereignty, and data integrity.
But I think it also creates, you know, this RF landscape that needs to be evaluated, because, you know, back in the day, the website was able to be protected by, you know, simple firewalls and things of that nature.
Then we got into more complicated east-west, kind of three-tier architecture, software-defined networks. We brought in virtual firewalls and defense in depth. And to some degree that’s the architecture that is cascading into AI. But the amount of horsepower, the amount of throughput, the amount of information that’s being exposed, and the extent to which these APIs are being exposed, you know, creates what I would consider kind of this third evolution of threat, which is the physical side of it. And I think that’s where the RF landscape becomes important, because as it becomes more and more difficult for adversaries to find their way into these critical infrastructures, you know, as you apply zero trust and as you segment it, they don’t give up.
They just try to look for the path of least resistance. And I think as we look at these modern data centers that have a tremendous amount of information flow, a tremendous amount of multi-tenancy, a tremendous amount of things occurring east-west and lateral, there’s this idea that, you know, the network is just one way of accessing that data center. And I think that creates this path and this conduit toward the question: is RF a means of getting access to the sub-layer of the network?
And the answer is most likely yes. I’m sure you have more stories. I’d love to hear examples.
But in my world, again, based on my background in military and cyber defense and warfare, you have to look at the entire spectrum of adversary, all the way from insider threat to credential compromise, and everything in between needs to be considered.
Yeah, you made an interesting comment about the path of least resistance might actually be this wireless domain in some cases, as we’ve done a really good job of securing other aspects of our network connectivity, we are seeing indications that adversaries are going after the wireless environment. So yeah, we’ll talk more about some of those stories in a bit. But alright, so that helps to frame the discussion. Let’s see if we could go forward a little bit. Maybe we’ve already talked about some of the stuff on this slide. Is there anything else that you’d like to say here, Paul?
Yeah, I think what I would just generally say is, you know, as these AI factories become kind of the way that GPUs and AI gets done, I expect that these workloads become more general purpose. Today, they’re very AI centric.
And people will say, Paul, if I’m gonna spend a billion dollars trying to build an AI factory for training AI, what else can I do with it? So what I’m also seeing as a result of this concept of an AI factory is an evolution of AI workloads, neoclouds, inferencing on the edge. And so I also wanna kind of paint a picture that, even though we talk about these in the gigascale, you know, gigawatt-type capacity, which is true (my largest project is seven gigawatts),
on the other hand, on the other end of the spectrum, I’m having conversations about how do we bring AI to an oil rig.
And that oil rig is sitting in a very ruggedized environment, but the value proposition of that AI model to accelerate and transform the business, you know, is driving different form factors of AI. And if you look at the situation, it’s not gonna be a single data center in Fort Knox with a bunch of security guards. You could walk up to this thing and, you know, do a lot of different potential harm to it. So we definitely have to be thinking about the physicality of these things, and again, the idea that there’s gonna be a whole lot more motivation to get access to information that is of such business-critical value.
Yeah. Paul, you referred to a data center project that you were working on, but you have an extensive set of experiences in this domain that I’m not sure we adequately captured at the beginning. Could you maybe just take a minute to talk about some of the things that you’ve been doing, in the data center space in general, but also what you’re working on today in terms of building out data centers?
I think that that kind of experience is super relevant. And it’d be nice for our audience to make sure they understand where this perspective of yours is coming from.
Yeah, I do think it’s important. So, you know, I did the whole cybersecurity kind of operational role, then started moving into entrepreneurship and built a cybersecurity insurance company with Joe Lonsdale and Palantir.
And then from there, I saw the need for GPUs because I was starting to access this hardware and finding it very difficult.
And so, a little bit just kind of my nature of who I am, I said, look, I can’t find this, so why don’t I go build it? So that started the hypothesis, which became a company that I was CEO and founder of called AmpZ Energy. And AmpZ Energy is a one-thousand-acre location in Ohio with GPUs that we provisioned, and we built a GPU-as-a-service cloud.
And as I was building it, you know, there were companies like CoreWeave and companies like Oracle, who, you know, are customers of Bastille for a reason, that were starting to look at this threat, and I started to go even deeper into security strategy, because one of the largest customers I was going after was the US government.
And so I was starting to think about how do I lock this thing down to military-grade mil-spec. So that was my experience at AmpZ. And then at Voltscape, what I’m working on now as CEO is taking that experience and being a little bit more on the consultative side versus an operator.
So I’m building probably twelve AI data centers around the world at this point.
You know, for nations. So Mongolia, Dubai, India, some in the United States, some in Canada. The clients are hyperscalers, and sometimes the governments themselves, because, you know, these AI factories have a sovereign component to them, where certain nations want to have control over the influence of what the LLM does, because the LLM is probably the best insider you could have: if you can manipulate that model, you can control the outcomes of a lot of different businesses, and that’s a very, very important thing. I was in Davos, and, you know, I might as well have been at the RSA conference or the Nvidia GTC conference. This was my first time there, so I didn’t have too much of an expectation, but it’s more of a political, kind of nation-level conversation.
But everyone’s talking about AI and cybersecurity. Those two topics took up ninety percent of the entire conversation: everyone’s thinking about this problem of what AI does in terms of economic opportunity, but what are its negative effects and how can it be influenced? Which then gets back to the compute, which then gets into the actual AI factories themselves. So that’s my background and kind of what I’m working on.
So I’m in the thick of this right now, just trying to solve, you know, the entire adversarial kind of threat model of how we need to be evaluating and protecting these assets. And it’s a bit like a gold rush. We’ve seen this in cybersecurity; I’ve seen this time and time again. We build a website, we put it on the Internet, and it’s compromised.
Then we put in firewalls and we figure it out. We go into cloud infrastructure, we find misconfigurations and problems. We buy companies. And again, I was CISO at Palo Alto.
So you buy Palo, you buy Wiz, you buy some of these CSPM companies, and you solve your infrastructure, DevSecOps. We’re in that paradox right now with AI. We’re building infrastructure at a rapid pace. We’re realizing that MCP and agentic AI and APIs have vulnerabilities, and LLMs have supply chain risk.
And that’s the world we live in today in terms of conversation. We’re starting to kind of open up our aperture to the reality that there are vulnerabilities across this entire supply chain, and the compute and critical infrastructure, I think, is probably the biggest area that isn’t really being discussed. Why I wanted to have this conversation with you is it needs to be something that we highlight to the industry for consideration and make sure it’s not a blind spot for my fellow CISOs.
Yeah, excellent. Thanks, Paul. I appreciate you giving us a little bit of background. The main takeaway for me, and I want to just highlight this, is you’re not some smart guy having discussions with high-level people in a tower somewhere. You’re in the trenches building this stuff. So you’re getting exposed, and you have been exposed, as an operator and now more in a consultant role, to all of the complexity involved.
And that includes the security aspect of building and maintaining and operating a data center. So you brought up nation-states and sovereignty issues. And I know we’ve got some material on that later. But maybe we can talk just briefly right now about what the interest is in terms of nation-states. You’ve already kind of mentioned it. But how does that play into people’s focus on security?
And the kinds of use cases that come into play. I’m particularly interested because I know you have a military background from many years ago, so you’ve been exposed to security from that side of things. You kind of know how militaries and defense operations have to operate, and they have a heightened sense of security. Some might use the word paranoia, but it’s also very appropriate in the environments they operate in. So how is that affecting the overall sense of what needs to happen to secure these facilities, based on the attention that’s being paid by nation-states and nation-state actors?
Yeah, I mean, I think the first place I realized the significance of this is when I was CEO of AmpZ and I was signing a document that maybe a lot of people aren’t familiar with.
And it’s ITAR. ITAR stands for International Traffic in Arms Regulations. I worked at BAE Systems and I was in charge of governing ITAR for a defense contractor. But ITAR is a regulation that basically says, hey, you’re a weapons dealer, and you need to make sure that when you manage this asset, you don’t sell it to the wrong nation-states. That’s literally what that regulation is. Here I am buying GPUs and having to sign that document.
That sets the tone for a reality check when you basically become a weapons dealer, because GPUs are considered a weapon.
Interesting. In terms of the US government, that’s the reality.
And if you start to kind of opine on, well, why is that, what’s driving that? You start to understand the political nation dynamics, which is that AI is a very powerful, I kid you not, weapon that can be leveraged in terms of all sorts of different things. We’re just scratching the surface of what an adversarial AI bot can do to us. We’re no longer dealing with humans. It’s gonna be AI warfare.
Yeah. And as a result, there are a lot of nations in an arms race to procure and control the GPUs that power the LLMs which influence this adversarial machine. And so at a nation-state level, what’s occurring is a couple of things. One, it’s an arms race to get control of these assets.
The second, it’s also somewhat an arms race to lock in future economic development. If you look at certain developing countries, you know, the idea of trying to create economic stability on infrastructure is evolving into: if I can become an AI powerhouse, can I create new economies for my nation? The answer is yes. And so now it’s about economic stability, and it’s about job creation and creating new economies.
So there’s a big economic positive to AI, in nations that today may not have the ability to do that in a more traditional sense. But I’m building a data center in Mongolia. Why?
Because Mongolia has some of the largest pieces of land and solar and energy development opportunity, and if they put in an AI data center that needs a gigawatt, you’ve just doubled the amount of energy the entire country can produce overnight.
Not gonna happen with natural population growth. So that’s what’s triggering some of these things. The other gets a little bit more into, let’s call it, competing nations in terms of superiority.
We can pick on China, for example. So when DeepSeek came out, right, it was a model, and if you asked it questions around how it thought about world views, it was skewed toward a certain political lens or historical view on influence.
Yeah. And it was again another highlight of the reality, which is we’re living in a world where a lot of misinformation, disinformation, and just the way we think and process facts can now become influenced by AI. And so if you’re a nation-state, you don’t want that to be a Western influence if you come from a certain part of the world, and vice versa. If you come from the United States, you don’t want your entire AI to be powered by a foreign LLM.
So the nations are starting to think about how they create their own versions of ChatGPT.
But if I look at it from a different perspective, I’m trying to create that sovereignty for my own control, but I also want to be able to create an offensive approach that might influence other people’s models toward my agenda. So it’s kind of a bidirectional electronic warfare that is just starting to become a new battlefront. This AI war, if you will.
That’s interesting. I hadn’t thought of it in terms of electronic warfare, but that does make sense. This whole concept of nation-states being involved so heavily is not something I would have thought of even just a couple of years ago. But in hindsight, it’s making more sense, particularly as I listen to you describe it. One thing that has become more clear to me quite recently, in discussions that we’ve been having with data center operators and customers of data centers, is that they are seeing a significant increase in nation-state activity in the form of attacks on their networks and on their infrastructure, much more than I would have expected. That kind of interest points to the power of the intellectual property and the value of it for all actors. So I think that plays as a subset into that overall conflict that you described, Paul, but it’s kind of an eye-opener for me to hear about some of those kinds of activities.
Yeah, I mean, you know, I had the benefit that my role at Palo Alto Networks was multifaceted.
I had some responsibility for Unit 42, which was our internal threat research organization, and we had an incident response team as well. So I got involved in daily breaches with our clients, our customers. One of the things we would do is parachute in and evaluate whether or not our product failed our clients. I will say that not a single one of those failures occurred under my watch, which is fantastic.
It’s a good testament to the product efficacy. However, people misuse the product. So that’s a whole different conversation for another day. But what I observed through that was a simple thing: if I’m building a vaccine, and that vaccine requires clinical trials and formula design and billions of dollars in R&D, only to watch that formula walk out the door in a cybersecurity breach.
That’s the analogy that exists with LLMs. An LLM is not cheap to bake. If you look at how expensive it was to create ChatGPT, I would argue it’s probably north of five billion dollars of GPU tokens to build version five of that model. So the last thing anyone wants is for that entire model to just go out the door.
So intellectual property theft happens in kind of two forms. One is the models themselves and the cost of developing those models.
And the second is the IP that is being used to train the models. Now, most of the foundational models are built off of public data. So you can argue that public data is easily accessible, and that’s not wrong.
But there’s still the cost of, you know, acquiring it and gathering it. And the reality is that you’re getting into private models and hybrid models, you know, and those are gonna be very much built on proprietary IP and sensitive raw data, let alone the trained algorithm output.
So in other words, if I could take a model that allows me to build my own pharmaceutical formulas, I don’t need your data anymore. I have the model that can create my own data. So that’s the power of these LLMs.
But it’s also the risk that I think is important in terms of just protection of the underlying infrastructure that these things are housed in.
There’s another aspect to the IP that’s involved here, and I think maybe we should move ahead a little bit. I don’t know if there’s something we want to say about this slide here.
But I think we hit it.
There’s a slide, I think maybe this one here, where it talks about the sensitivity of data that operators or clients are using, right? So you talked about the models, you talked about the training data, but then the data that’s being fed into these algorithms for operational use is also increasingly sensitive relative to data that used to be pushed to the cloud. Can you talk a little bit about that, maybe in the context of some of these verticals, how that’s being utilized and what the risk is associated with that data?
Yeah, I mean, the first one we kind of hit on, right, when I was talking about formulaic, you know, approaches to vaccine discovery.
Yeah.
The other ones, I mean, are going to be, you know, financial services and health care, and if you look at pretty much every industry, there’s going to be some level of AI adoption. The question becomes how sensitive is the nature of the business itself.
But I also think the sensitivity increases because AI is a very hungry animal. It wants data. So even in organizations that might not have seen value in extracting personally identifiable data, the AI model might actually want that. Even if I’m in retail, if I’m in a retail environment, I want to do predictive consumer analytics to determine the likelihood of what I purchase. If my DNA can be profiled in a way that creates a predictive model, retailers are going to want that DNA. So I really think that most industries are going to default to highly sensitive information, whereas today it’s more industry-driven. Financial services has personal bank accounts and financial transactions.
Healthcare has personally identifiable data. Retail has PCI and credit card data. I don’t think that industries are going to be so siloed in terms of data sets in the future. I think it’s going to be just a broad stroke of data that most organizations are going to want, because AI is gonna demand it for, you know, whatever business outcomes they need it for.
So that’s one way to look at it. But today, you know, going from left to right on this slide, you know, financial services, you know, every one of these industries is evolving. I mean, today we talk to a banker about financial decisions. In the future, it’s gonna be an AI bot that’s looking at all of your financial transactions and asking for that information, or has that information, to make decisions on whether or not you’re a risky client, predictive default, fraud detection; everything’s just going to expand.
So as a result, you’re going to have a lot of sensitive information in a lot of different hands than you do today.
Yeah.
I think to me that’s one of the most interesting aspects of how things are changing as they’re evolving from traditional to AI data centers.
The data needs to be hosted in the cloud in order to execute whatever use case you have; having AI operate on it requires the presence of that data. So now everything that’s being hosted in this data center is enormously more valuable than it was previously. Not to mention, as you said before, the AI models themselves, the weights, the other aspects of them, the training data associated with them.
All of that stuff is incredibly valuable, which speaks to the desire for nation states to have access to it and other attackers who might be able to leverage theft of that IP to create some financial gain for themselves.
And then I think, like, in a more historically paranoid view, you know, I might say as the CISO of a defense company or a financial services company, we’re a bank and all my data never leaves my data center. But the reality is that you’re not going to put a GPU factory at one-gigawatt scale in that data center today. That’s why I wanted to talk foundationally about why these data centers are changing and the pressure it puts on existing physical security defenses. The fact is, even if that defense works today, it doesn’t scale, because the data is going to start moving. So you have to be thinking about multi-tenancy, about API exposure, and ultimately where does that data go, and does that data have the same level of security that it did when it sat on your own physical infrastructure?
Yeah, I want to make sure we touch on multi-tenancy. I can’t remember if it’s here. Oh yeah, it is. This is perfect.
Okay. Yeah. Let’s go ahead and continue that discussion. Because I think it’s leading from, starting from, I house everything in this data center, like I am the operator.
I own all that data, to now AI data centers where the cost of operating is huge. There are data center providers, I’m sorry, AI algorithm providers that are being hosted in these centers. And then there is the need to have many different organizations able to access that. Their data goes into the cloud.
Now you have this multi-tenancy kind of approach, which I guess there’s always been some of that, but it seems like this is the way it’s going.
Like, everywhere in AI. And now it’s not so much that you can control your physical environment; you’re leasing space next to someone else. It’s all kind of in the same data hall; maybe you’ve got a few racks and someone else has a few racks now. How much do you trust your neighbor is kind of the question, right? The physical access is more concerning because you can’t lock everything down just within your own organization.
Yeah, I mean, I do think that gets into part of the evolution here, which is that these aren’t simply changes in the threat landscape; the criticality of the asset changes too. So you have that variable we’ve discussed, the infrastructure changes, causing organizations that maybe were air gapped in the past to potentially need to traverse the cloud for the first time to drive an AI initiative. You start to gather all of that, and then you get into the situation where security has matured to a point where it might be very difficult to get access digitally to those environments. So what’s the next best thing?
And I think that’s where you get into making sure that your neighbors are not a threat vector that could create wireless surveillance or wiretapping or some things that can maybe be outside the bounds of the digital landscape as you think about it today.
Yeah.
Yeah, we’ve for many years been talking about the dissolution of the physical boundary. Talk about perimeters: perimeter security was all the rage, you know, decades ago, and firewalls were the way to go. But as we start to see the propagation of attacks, it’s not a matter of if, it’s a matter of when, and then it’s not even a matter of when, it’s a matter of how much is it happening right now, because you just have to assume that an attacker has penetrated your perimeter. And then how do you deal with that?
And then you have the evolution of zero trust and all these things we’ve come to today. Now, this idea where the physical boundary doesn’t seem to exist at all. I mean, you have a physical boundary for a data center, but there are tons of organizations that have access to that. And the physical environment can be compromised.
You, you know, you can bribe somebody or get a part-time job or something; you find your way in to implant something useful physically that gives you access. And the other aspect that we often come across in talking about wireless security at Bastille is this bias towards the need for physical proximity.
And that is true, you need physical proximity to connect a wireless link, but it’s also somewhat false. It’s a false narrative, a false assumption, because there are so many wireless devices all around us that really you just need access to one that’s close enough to conduct the wireless piece of an overall attack chain. And that’s been demonstrated through some stories.
Maybe I’ll share one in just a minute. But this kind of environment, this paradigm for AI data centers, I think speaks to those myths. It’s dissolving the boundaries, the physical boundaries, and having dissolved them, this multi-tenancy approach means that adversaries can very easily get close enough to do damage with wireless.
And we’ll talk in a minute, I think, a bit more about how wireless is utilized in different contexts. But that’s the thing that keeps coming to my mind. And I just wondered if you had any thoughts on that, Paul.
Yeah, it does. I want to maybe take questions; I see some chat questions. So there’s one in the chat around, you know, how is this different than maybe a hyperscaler or, you know, a SaaS public cloud infrastructure, and how do these threats exist today. So I think we can stitch that question into the conversation right now.
I mean, fundamentally, it’s not different. The idea of having colo in racks, you know, has been around for two decades. I’ve worked at Savvis data centers and racked and stacked in multi-tenant data centers for two decades.
But the motivation and the desire to get that information in a more physical kind of landscape, I think, is becoming more and more of a probabilistic threat. It’s always been there. When I do threat models and I do kind of risk registries and all this stuff, I never close a threat. Like, when I used to present to my board, I would never say, hey, a threat’s addressed.
It’s now closed. The probability, the velocity, or the likelihood might have decreased, but it’s never gone. So in this case, you know, multi-tenancy, VM escape, I mean, there are so many different conversations around this that have occurred in my career. They’ve always been there.
But why does it become more important today? Couple things. One is the infrastructure is different.
If you opened up a GPU or an AI farm and you looked at the network, you’re talking about networks that are not TCP/IP based. You’re talking about Mellanox 400, 800 gigabit-per-second AI inferencing and training compute that never hits an operating system. It’s GPU to fabric to, you know, to storage. It’s very interesting, different technology. And as a result, it also means some of the traditional technology doesn’t carry over. You can’t put a firewall in the middle of an 800-gigabit-per-second training flow. It’s impossible right now, at least.
So there’s some gaps. Like some of the technology doesn’t move over.
So that’s one thing to look at. The second I think is again back to the motivation. If have if this stuff is very difficult to get access to but it’s highly valuable, more valuable than compromising GoDaddy and trying to do the same thing.
I think it just creates more sophisticated attacks on areas that maybe were always a problem, but are now more statistically probable to occur because the outcome is worth the effort. That’s a way to look at it.
So that answers the heart of the question.
But I also think it kind of answers maybe the questions we’re discussing around like, is this a problem? It’s not a new problem.
War driving and wireless attacks have been around for a long time, right? I’ve played around with my own versions of Raspberry Pis and Pineapples, and now, you know, my son plays around with different technology that I’m probably trying to look for in my house, with the RF signaling, because you can clone key fobs and you can do all sorts of stuff.
Oh, you’re talking about like a Flipper Zero or something?
Yeah, Flippers and things like that. Thank you.
Things are starting to become more accessible, right, is one way to look at it. And things are becoming a lot more probabilistic.
That’s how I’d kind of equate it. And valuable, right?
So that shifts the focus. Maybe the threat was already there, but now the focus is intensely on this area where it wasn’t really before.
Correct.
There’s another question, and maybe this is a good segue, because we started to introduce signal attacks, right? The signal attack vector as kind of the main discussion point on the “so what.”
So there’s a question there around megahertz. So that looks like it’s a question for you.
Yeah.
More than me. Even though my job in the military was actually signal intelligence for wireless networks.
Yeah, I mean, you could work to address it. I think we’ll talk about that; you’re the PhD on the signal side, so.
Yeah, I want to get a little further into this discussion before I answer that question.
Got it, okay.
Okay, great. So let’s let’s move forward.
See where we’re at here.
Oh, yeah. Yeah. So we’ve touched on sovereignty. But Paul, is there anything else that you’d like to say about this slide?
No, I think we’ve kind of covered it. Okay.
Okay, so so this gets us to something that I’ve I’ve only become aware of somewhat recently. And I think Paul, you’ve had much more exposure to it than I have. But but this concept of a dark data center has become more interesting, I guess, where people are looking at creating environments where there is minimal to zero human presence. I’m not sure if zero is possible, but but they certainly seek to limit it to make everything that’s operating in that data center completely automated and try to lock it down physically as much as possible.
That’s at least my understanding. And Paul, you can correct me if I got that wrong.
No, I mean, examples of this would be custom waveforms on 5G for military drones, so I’m helping in that environment.
And it’s getting to a point where some of the technology is allowing for AI to create facial recognition, to self-identify. And I’d say we’re not quite at self-launch or self-attack; there’s still some human in the loop on some of it, luckily. But it’s getting pretty close.
If you look at those models and those kinds of environments, they are lights out, even to a point where they don’t exist on the internet. They live in ecosystems where there’s still signal, right? Good signal. Could be Starlink, could be satellite, could be custom-waveform 5G or LTE.
So there are different form factors that it comes in. But the general goal is still the same, which is that this thing is an intelligent part of the ecosystem.
It’s AI driven.
It’s massive compute. It can’t be put on the drone itself, can’t be put on the vehicle itself. I mean, Tesla is an example as well. You have Autopilot driving algorithms. It’s training, inferencing, and supervised learning, updating its algorithms on the back end.
What if you manipulate the algorithm to say that every, you know, pothole is a kid, or just kind of mess with it, right? So these are the kinds of scenarios that keep me up at night sometimes.
So the more you know, the more you get paranoid. But the goal here is to make these things, you know, as inaccessible as possible.
But the reality, and the conversation to have, is: does that leave it somewhat susceptible to physical attack, wireless attack, RF attack? And the answer probably is, if you’re going to have to allow a drone to talk through RF on the good side, you probably just want to monitor that traffic for any negative signals that might come in as well, to either interject, hijack, or influence.
Yeah. So it’s interesting to hear you talk about the use cases. I wasn’t really familiar with some of them, but the idea that you want to isolate this thing, and I guess try to air gap an entire facility, is an interesting one. But it does lead to the need for increased automation and autonomy of systems operating inside of that facility. And that automation is largely facilitated by communication links.
And the vast majority of those, I think, are wireless. Some of them will be wired, but any kind of autonomous platform that’s operating in that facility is going to require a wireless link.
And so I think maybe to some extent you lock down the WAN-type wireless signals to try to air gap things, but you’ve still got the Wild West going on inside of the local area networks. And those wireless protocols, Wi-Fi, Zigbee, Bluetooth, whatever they might be, are very vulnerable to attack, Bluetooth and Zigbee much more than Wi-Fi, but still, all of them have different vulnerabilities that could be exploited. So in the push towards increased automation, I think there’s actually not less of a need but maybe more of a need to ensure that we lock down the wireless domain, to make sure that it’s solid, that things are operating as they should be, that we don’t have rogue devices penetrating. And they don’t have to penetrate the physical perimeter, by the way; they just have to get close enough, which doesn’t have to be necessarily super close. It just has to be close enough to close a link.
So to me, this gets into, like, this is where, you know, we get into a little bit of the let-me-put-my-tinfoil-hat-on-for-a-minute kind of situation. But I think it will come, or maybe it already has: getting close enough might actually be more about controlling the supply chain that creates the physicality. What’s the best way to get something close enough to the IP and the models?
It could be as simple as just interjecting a piece of random technology that is part of the supply chain, like a light bulb. Yeah. But that light bulb is a very sophisticated light bulb because I built it, and that light bulb has a lot more technology in it than is needed to, you know, turn on the light. I don’t know, you know, maybe it’s a good time to kind of open up a little bit of the storytelling from your perspective, because this is where my curiosity starts to, you know, kind of get the best of me.
Sure. Well, I mean, as an example, there was a botnet type of worm that was developed for Zigbee that targeted light bulbs. It’s just one example, and this was a long time ago, but a light bulb is a simple, small structure. You think it’s just used to create light for humans, but it’s easy enough to integrate a wireless interface, one or more, with that light bulb.
I mean, as electronics have become increasingly sophisticated and miniaturized, it’s really simple to put a modem, or many modems, on very small form-factor things. It’s kind of amazing. As I joined Bastille, I feel like I was kind of clueless before about how much these NICs had propagated through the ecosystem. People have them in their shoes and their coffee mugs and sweatshirts, like weird stuff that I wouldn’t think, why would you have that? But there’s some kind of functionality that they want to enable, and they want to connect it to your smartphone and whatever.
So it just illustrates the point that so many of the common things we just sort of take for granted have the potential to, and often do, have some sort of wireless interface built into them.
So you asked about stories.
You know, we see a lot of stuff at Bastille. We see a lot of devices that come into facilities that are being monitored, and we kind of understand after a while, okay, well, this device does this, it behaves this way, and you can track devices over time. When we deploy a system, we do it in a way that allows us to, over time and in space, capture information about every wireless emission that’s going on. And maybe this is a good time for me to pause and answer the question that came in about frequency ranges. So we do see a large range of frequencies, from, you know, sub-gigahertz; we typically will monitor generic frequencies from 100 megahertz up to 6 gigahertz. And of course, we go up to 7.125 gigahertz for Wi-Fi to cover the 6 gigahertz band that was introduced in Wi-Fi 6E and is now in Wi-Fi 7.
So we’re covering all these frequencies, but primarily we’re looking for protocols or signals that match the profiles of protocols, including Wi-Fi, cellular, Bluetooth Classic, Bluetooth Low Energy, Zigbee, and other IoT protocols. And if we see those signals, we extract metadata from the headers of those signals, we timestamp them, we analyze the metadata, and we localize all those emissions in space. So we get this multi-dimensional set of data that we can operate on and understand what’s happening in the environment.
We also have the ability to look at just generic spectrum. So we’re not necessarily having to focus on just cell phone bands based on towers that are in the area; we might look for anything and everything, so that if custom waveforms are floating around on odd frequencies, we can see those as well. So that’s kind of what we do. And as we deploy these, we learn a lot about these different environments. And of course, in different use cases, we come across interesting stories, but there are also lots of stories in the news. So a couple that come to mind.
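To make the kind of pipeline Brett describes concrete, here is a minimal illustrative sketch in Python. The record fields, device identifiers, and allowlist are hypothetical examples, not Bastille's data model or product API; the point is simply that once emissions are classified by protocol and enriched with metadata, timestamps, and an estimated location, they can be checked against an inventory of authorized transmitters.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical record type and field names, for illustration only.
@dataclass
class Emission:
    protocol: str        # e.g. "wifi", "ble", "zigbee", "cellular"
    device_id: str       # identifier derived from header metadata (e.g. a MAC address)
    freq_mhz: float      # observed center frequency
    timestamp: datetime  # when the emission was observed
    location: tuple      # estimated (x, y) position inside the facility, in meters

# Transmitters the operator has authorized (hypothetical inventory).
AUTHORIZED = {"ap-01", "ap-02", "sensor-17"}

def flag_rogue(emissions):
    """Return emissions from transmitters that are not in the authorized inventory."""
    return [e for e in emissions if e.device_id not in AUTHORIZED]

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    observed = [
        Emission("wifi", "ap-01", 5180.0, now, (12.0, 4.5)),
        Emission("cellular", "hotspot-9f3a", 1950.0, now, (3.2, 18.7)),  # unknown hotspot
    ]
    for rogue in flag_rogue(observed):
        print(f"ALERT: unauthorized {rogue.protocol} emitter {rogue.device_id} "
              f"near {rogue.location} at {rogue.timestamp.isoformat()}")
```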
One is for a system that we had deployed in a data center for a customer, and we’ve done this a number of times, many times actually, over the years. And in this one particular case, we were monitoring for a while and noticed this wireless hotspot coming into the data hall periodically.
This hotspot would connect to a client in a server rack, and they would maintain that connection for about an hour at a time. So they have this data path going back and forth between a server rack and someone’s smartphone that they brought in. And meanwhile, they have a cellular backhaul mechanism. So they’re tied to the tower, and that can go anywhere into the cloud. So there’s now a data exfil path that is being executed inside of a data hall.
Imagine that happening with the kind of sensitive data that we’ve been talking about, that Paul, you talked about earlier, just being used to funnel data right outside. And we don’t have a mechanism for bringing visibility to that hotspot, which nobody looks for, really. If you have a WIDS type of system integrated into your access points, it’s looking for intrusions into your network. That’s all it’s paying attention to. It’s not highlighting the fact that a hotspot appeared; it doesn’t care.
So the fact that someone could simply put a little dongle in a server rack, maybe, or some other mechanism that allowed them to emulate a client on that rack, and then every time they come in, they just connect and they’ve got an exfil mechanism right away. That’s, to me, crazy that no one’s paying attention to that.
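As a rough illustration of the exfiltration pattern in that story, a simple analytic over emission logs could flag long-lived associations between unmanaged hotspots and clients located in sensitive zones. The device names, zone labels, and the 30-minute threshold below are made-up placeholders rather than a real detection rule:

```python
# Hypothetical observation format: (hotspot_id, client_id, minutes_connected, client_zone)
observations = [
    ("hotspot-9f3a", "rack-a7-server-03", 58, "data_hall"),
    ("ap-01",        "laptop-ops-12",     45, "office"),
]

MANAGED_APS = {"ap-01", "ap-02"}   # operator-owned access points
SENSITIVE_ZONES = {"data_hall"}    # zones where no ad hoc Wi-Fi association should appear

def exfil_candidates(obs):
    """Flag long-lived associations between unmanaged hotspots and clients in sensitive zones."""
    flagged = []
    for hotspot, client, minutes, zone in obs:
        if hotspot not in MANAGED_APS and zone in SENSITIVE_ZONES and minutes >= 30:
            flagged.append((hotspot, client, minutes))
    return flagged

print(exfil_candidates(observations))
# [('hotspot-9f3a', 'rack-a7-server-03', 58)]
```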
So we’ve seen other things, like unsecured Zigbee modes enabled in industrial-strength chillers in data centers; we’ve seen laptops connected to servers, which nobody seemed to notice until the Bastille system identified that a Wi-Fi signal was coming from somewhere it shouldn’t have been.
So that visibility allows us to lock things down in a way that I think most systems and operators really haven’t grasped yet. And that’s where we’re getting some interest from the data center community, I think driven by some of these drivers that you’ve elucidated for us, Paul. We’re seeing massive interest in making sure that they lock down the multi-billion-dollar IP that they’ve built up and are hosting. To make sure that it’s secure, it’s worth going the extra mile to make sure that you’ve covered all your bases. Wireless is a big gap that, because of the lack of visibility in the overall defense ecosystem, I think is like the easy button for attackers. It’s becoming the easy button because we haven’t historically been looking at it. So that is the gap I think we need to start plugging.
Yeah, it reminds me of one story that I was tracking pretty well because, at Palo Alto, we acquired a company called Zingbox, and we put the Zingbox product into Cortex, and we have this IoT kind of analysis. And of course, there are companies like Armis out there that focus on IoT in a more traditional Ethernet sense.
But even in those scenarios, there was a casino that had a thermostat on a fish tank. That was the vector of breach that led to data loss on the casino side, because it was sharing the same network, you know, on one side and had its own network on another side. Yeah. And that’s a case where it’s a known device that had a known or unknown vulnerability. You know, I think you guys think about it even more broadly, which is just being aware that devices have communication, simply knowing your inventory of things in your facility, buildings, you name it.
And what is communicating, not making assumptions that it’s only the laptops or only the routers or only the servers that have the ability to generate communication.
I think just the idea of hygiene around knowledge is the first place to start. Again, it’s like back in the day when everyone had this debate on whether or not public cloud infrastructure was secure, and the question became not about yes or no, it was just: find out. And you started to create, you know, a market called CSPM, which, you know, Wiz and Orca and Lacework and others dominated, to scan and evaluate configuration. Just getting that knowledge became the first step. And so I don’t know if there’s, you know, a best practices white paper on this; I’m kind of now more focused on the way you guys see how to engage the marketplace. But from my perspective, with a CISO hat on, the first thing I wanna know is: do I have a blind spot, and how do I find out whether or not there’s a threat, versus letting it become something that’s known to me through a breach situation.
Yeah, absolutely. Visibility is step one, and then it’s analytics and action. So very often we simply don’t have visibility. Let me advance a little bit in the slides. I wanna make sure we get to the end before the top of the hour. So I think we’ve already touched on a lot of this: whether it’s a dark data center or a traditional data center or an AI data center, there are communications involved.
I think as automation increases, the communications inside of the facility tend to increase. But there are generally both the WAN and the backhaul types of communication links, which for the most part have been well secured. I mean, there are still vulnerabilities all over the place. In fact, we didn’t really talk about this, but smartphone spyware is one area where I get concerned about the ability of nation-state actors.
And we’ve talked about their interest in this domain. Nation-state actors being able to infiltrate people’s devices. And then you’ve got this very sophisticated smartphone whose resources an attacker can leverage to penetrate various networks. And you may already have authorization to join a network in a data hall, the Wi-Fi network.
And if they have spyware on your device, they can manipulate that interface.
So there are vulnerabilities there, but I would say, by and large, the 5G, the LTE, even the Wi-Fi on the LAN side have been relatively locked down. But then we have this proliferation of devices for sensing, for automation, that largely leverage shorter-range communication protocols like Bluetooth and Zigbee and others like them.
And to be honest with you, the security in those protocols has been a bit of the Wild West. It’s also more difficult to update those devices when implementations get patched.
And so a lot of this stuff just persists in the ecosystem. And that’s one of the things that concerns me the most: without bringing some visibility to it, we don’t know what’s out there, on top of the insecure nature of some of those protocols and the fact that they’re not as short range as you might think. Bluetooth has been demonstrated to be attacked at over a mile away.
So you think that’s a fairly short range communication protocol but it turns out there are ways to extend that range significantly. And so that kind of thing concerns me, especially when you consider that the physical boundary of a data center is not the secure thing that you thought it was if these electromagnetic waves can just sail right through and from very long distances away.
So we have a bit of a blind spot, as we've alluded to. We are improving a lot of things, but what is lagging right now is our ability to detect rogue and unauthorized wireless emitters. That's the physical piece I think we're missing. Even when you think you're air-gapped, if you have devices inside a facility with wireless interfaces, they are vulnerable to attack, and your perimeter doesn't really exist as you know it.
I think I’ve maybe said all of this.
Anyway, we're moving toward a model where the simplest way for an adversary to attack us is through the wireless interface. And maybe that's a point where I can bring up one more story that I think is interesting.
Well, maybe a couple of stories that are interesting. One was reported in the news; it wasn't one that Bastille was directly involved with, but there was an example of a device found in a cabinet, connected to a switch that was part of a bank's ATM network.
It was a Raspberry Pi. You mentioned that they're pretty easy to configure and operate. And this Raspberry Pi had a 4G modem integrated into it.
So some attacker presumably got physical access, plugged this thing in, and walked away. It sat in a closet somewhere, operating on the network. The 4G modem was what the attackers used to gain initial access, do some flexible configuration, and try to penetrate the network and get a presence somewhere else where they could open a backdoor, and so on.
They did a really good job of living off the land and were able to operate undetected on the wire, but they had a wireless interface operating with impunity because no one was looking. This is the easy thing for attackers to do. And as multi-tenancy becomes more of a big deal, and as certain factors push adversaries to want this data even more, they're going to focus on whatever the easiest path is.
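As an aside, here is a hypothetical sketch of how that "quiet on the wire, loud over RF" pattern could be caught: cross-check RF detections against the wired asset inventory and flag any radio that no asset on record should have. The asset names, zones, and records below are illustrative assumptions, not details from the reported incident.

```python
# Hypothetical sketch: cross-check RF detections against the wired asset inventory.
# A device living off the land can look unremarkable on the wire, but its 4G modem
# still has to transmit. All names and records below are illustrative only.

wired_assets = {
    # asset id       -> radios that asset is expected to use
    "switch-atm-07":    set(),       # a network switch should not transmit anything over RF
    "ap-hall3-12":      {"wifi"},    # an access point is expected to use Wi-Fi
}

rf_detections = [
    # (nearest asset location, protocol observed transmitting)
    ("ap-hall3-12", "wifi"),
    ("switch-atm-07", "cellular"),   # a cellular uplink next to an ATM switch cabinet
]

for asset, protocol in rf_detections:
    expected = wired_assets.get(asset)
    if expected is None:
        print(f"ALERT: {protocol} emitter near unknown asset location '{asset}'")
    elif protocol not in expected:
        print(f"ALERT: '{asset}' has no {protocol} radio on record, but one is transmitting")
```

The design point is the correlation itself: wired monitoring alone never sees the cellular backchannel, so the two inventories have to be compared.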
This isn't new; people have been doing wireless attacks for a long time, but they're becoming more problematic in this particular environment. The last story I want to mention has to do with activity attributed to a Russian APT that was targeting a particular customer of an organization that thankfully wrote up a report and shared it with the world. The basic process was this: the attackers were seeking to gain access to the target's network over public-facing services and were blocked by MFA. At that point they figured out which organizations were geographically close to their target, penetrated those networks, moved laterally, found multiple dual-homed devices, and fired up their wireless NICs to connect to the Wi-Fi of their primary target. They simply compromised opportunistic devices that had wireless interfaces so they could achieve their objective.
It's kind of fascinating; it's called the Nearest Neighbor attack, and if you're interested, look it up. It dissolves the assumption that physical proximity is required, as well as the assumption that reliable boundaries exist to prevent these kinds of attacks.
Okay, maybe I’ve babbled enough but let me move on to one last thing kind of highlighting the importance of all of this.
Bastille, we recently put out a press release announcing a global collaboration between Bastille and Oracle. Oracle is working with us to roll out wireless visibility and analytics to their global footprint of AI data centers. This is a huge deal because Oracle as you know is building these like mad.
There are tons of these data centers, huge amounts of money being spent to build up this infrastructure to house these AI algorithms.
And I think this is rapidly becoming the new de facto standard of security for AI data centers: plugging this wireless gap, which, as both Paul and I have said, is sort of the easiest way in for an adversary in a lot of these environments. So plugging that gap is critical to protect the valuable IP operating there. I think this is part of the future of AI data center security.
I think this is where things are headed.
Paul, I’ve monopolized the time in the last few minutes.
Is there anything that you’d like to highlight that we haven’t talked about or maybe foot stomp something?
No, not really. Honestly, I think we had a good discussion on the situation. Hopefully it was beneficial for those listening, and I appreciate people taking time out of their busy days to listen to us.
So yeah, I think we covered everything and really do appreciate everyone’s time today.
Good. Thank you, Paul. I appreciate your time and your insights and everyone for joining. I’m going to turn it back to Justin now to talk about where we’re headed with this series of webinars and then we’ll do some Q and A.
Great. Thank you so much. Fascinating discussion. A number of people sent in questions during the course of the event.
So someone asks, and it's kind of a long question, but we'll get into it: what is the critical infrastructure that needs to be part of the conversation when it comes to these AI data centers and AI factories? And how ready are major decision makers to have these conversations? I've seen a lot of focus on the output and way less on how it all happens, and it feels like the market is really reactive, with a "we'll deal with it when it's a problem" posture.
I hear a lot of "we haven't thought about it yet." Also, how sustainable do you think hyperscale data centers are for the long run? And from your perspective, is there a place for smaller, more decentralized data centers under one hundred megawatts? There was also a question about the verbiage: AI compute becoming "AI factory" versus "AI data center."
So a lot of questions there.
Choose a couple of those before we run out of time.
Yeah, I mean, there's definitely room for neoclouds and inferencing in different forms. We're in the infancy of AI.
AI is a toddler right now walking around in our world.
So I expect it to end up in my phone, with massive levels of compute, with Qualcomm and AMD and others pioneering that.
So I think it'll find its way into various different types of compute, including one-hundred-megawatt sites, even for training.
So yes, I do think there’s room for that.
Second, I think there was a question around... just pick one more part of that question for me, because I forget the others. Right, the "deal with it when it's a problem" posture.
Yeah, I mean, yes, that's the reality of cybersecurity. I've been doing this for almost three decades now.
It becomes something that... I mean, let's do a series on quantum computing, right? It's there. The theory, the math, starts to show that it's a threat, and then it becomes something that we start to deal with, to some degree, as an industry in a reactive mode. What I like about that press release you just shared is that it's a major indication that people understand the problem. It's not me shouting from the rooftops, "hey, I think there's a problem here." Organizations are actively investing in solving this through Bastille, and these are organizations that have a lot at stake.
But they shouldn't be the exception. We need to be looking at this with a holistic threat model, all the way down to the tiny little data center that could become the thing that messes everything up, or the little thermostat in the aquarium of a massive casino. This isn't a threat confined to a certain industry or a certain type of situation.
It's borderless, physically, literally borderless. And that's something I tend to beat on more than anything: you go to certain regions and certain industries and they say, "that's not a problem for us right now." This is the analogy I used to give to banks in Brazil: your banking system is probably the exact same as Wells Fargo's.
But attackers don't start with you, right? When you want to go into the boxing ring with Mike Tyson, you don't train with Mike Tyson. You pick on someone else; you start somewhere else. So a lot of the time, when it comes to these threats, you can't focus only on the end result. You have to focus on the journey a lot of these adversaries take, and you don't want to be the sparring partner on their way up to the big boys. We have to be thinking about it holistically.
Okay, another question. Can you prevent RF signal leakage with SCIF-type technologies and RF shielding, by wrapping a data center in RF foil and other physical measures to mitigate RF emissions from data centers?
I mean, yes. You can block the facility from being surveilled or compromised from the outside, but that doesn't preclude the threat from within. It's like this: I was on a boat once with the chairman of the board of Vodafone, and the question he gave me ad hoc was, "Paul, Huawei, yes or no?" It was literally a spot question on what my position on Huawei was.
I said, look, you can focus a lot on the possibility that Huawei introduces a threat in the supply chain and, quote unquote, not buy your 5G equipment from that supplier. But are you stopping an adversary, post-deployment, from compromising any of your wireless 5G infrastructure and doing the same thing? So don't be complacent about the threat coming from one direction.
It can come from many directions. So why not create an assumed-compromise position that says: look, I don't trust Huawei, so I'm going to monitor the crap out of the product. And by the way, if I decide to go with Qualcomm or AMD or Cisco instead, I'm going to monitor the heck out of that product as well. Same outcome: you want that operational visibility.
And so to me, that’s how I think about it.
So let me chime in on that as well. Paul, that was a great answer about how shielding doesn't protect you from what's already inside.
Theoretically, the answer is yes: you could wrap the entire facility in metal and make it a Faraday cage. But that's theory. In practice, it's extremely difficult and orders of magnitude more expensive than simply paying attention to what's going on in the wireless domain, which will actually secure you more because, as Paul mentioned, shielding doesn't protect you from what's already inside. Wrapping a facility well is extremely challenging; it only takes one seam for those waves to start leaking out or penetrating in.
And over time, in a complex environment like a data center, the seams that are necessary to let things come in and out will degrade. So even though you spend all this money, chances are the shielding degrades and eventually isn't as effective as it used to be. It will always be somewhat effective, but it is hugely expensive and, in my opinion, just not reliable enough to warrant that kind of cost.
We have other questions; we could spend more time on this and can follow up with some people individually. For example: have there been any actual incidents where RF threats have been leveraged against data centers?
I'm sure we could give many examples, but we're conscious of people's time. Someone asked a question about a post-quantum environment: what is the potential impact on AI data center security technology?
Well, I thought we were going to avoid that question, but yeah. I didn't even know that was in the queue; I was just thinking about where the world evolves to. Quantum is a significant component. If I look at it from a different point of view, I think it makes the case for why we're having this conversation today. You cannot assume that 5G is secure, because the encryption itself may not be secure.
There are quantum-resilient products and technologies out there. I've looked at them; I've evaluated them.
So I think that's going to be an important part of how we protect the intended communication channels. But I want to keep looking at the unintended communication channels, the ones that are vulnerable simply because they're not in my purview.
Again, most organizations think about things from critical systems on out. They don't think about the HVAC, or the unsecured Python app that's an internal system. Why would that get compromised?
But it's a vector for pivoting. So my view is, I want to look at the entire RF spectrum; I want to look at everything that's coming and going.
Maybe you have an answer today on how you analyze whether insecure RF protocols and standards are in use, which would then tell you whether those systems are insecure. But at the end of the day, you're also just profiling what is there as step one, whether it's quantum or not. Yes, quantum is going to wreak havoc on SSL, on encryption, on a lot of our currently assumed-trusted security protocols. But there's hope: companies and products and solutions are starting to come out that approach post-quantum computing from a completely different point of view.
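As a hypothetical illustration of that "profile first, then assess" step, one could tag each detected emitter with the protocol and security mode it appears to use and flag configurations that are publicly known to be weak. The rule set, device names, and fields below are illustrative assumptions, not an exhaustive or vendor-specific assessment.

```python
# Hypothetical sketch: once emitters are profiled, flag links whose security posture is known-weak.
# The rule set is illustrative; a real assessment depends on protocol version and configuration.

WEAK_CONFIGS = {
    ("wifi", "wep"):             "WEP is trivially breakable",
    ("wifi", "open"):            "unencrypted Wi-Fi",
    ("bluetooth", "just-works"): "BLE Just Works pairing provides no MITM protection",
    ("zigbee", "default-key"):   "Zigbee default trust-center link key is publicly known",
}

detections = [
    {"id": "sensor-114",  "protocol": "zigbee",    "security": "default-key"},
    {"id": "badge-22",    "protocol": "bluetooth", "security": "just-works"},
    {"id": "ap-hall3-12", "protocol": "wifi",      "security": "wpa3"},
]

for d in detections:
    reason = WEAK_CONFIGS.get((d["protocol"], d["security"]))
    if reason:
        print(f"WEAK LINK: {d['id']} ({d['protocol']}): {reason}")
    else:
        print(f"ok: {d['id']} ({d['protocol']}/{d['security']})")
```

The same profiling step would later let you flag links whose cryptography is vulnerable to quantum attacks as those assessments mature.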
That's on the signal side. On the compute side, I don't know; I'm not close enough to what IBM and Watson and others are doing around qubit processing.
Yeah, from the wireless perspective, there's nothing unique there. Anything that quantum can break, it can break on the wireline side or the wireless side. So I don't have any special insight other than that we'll see, as things evolve, which algorithms break. And as we monitor wireless protocols as well as wireline protocols, we'll be able to tell what kind of cryptography they're using and whether it's vulnerable to quantum attacks. I don't know if that's helpful; to me it's an important topic, but a little bit tangential.
Thank you both. So this was the first in this series of webinars; the next will be about best practices and ways to ensure that your data and your models inside data centers are adequately protected. If you'd like to know more about Bastille, please visit bastille.net. Paul, Brett, thank you so very much indeed.
Thank you. Thank you, Justin.
Bye. Take care.