Artificial Intelligence May Change the SOC Forever: Palo Alto CEO on How 'ChatGPT Has Reformed the Way We Interact With Computing'
ChatGPT is "amazing" and "has reformed the way we interact with computing," said Nikesh Arora, chairman and CEO of Palo Alto Networks.
Yes, he said, it can be used to create malware but that malware is blockable because it was created from recursive models. And generative AI can be used to produce phishing attacks at scale, he warned, but we can "fight AI with AI." Arora said the value in generative AI comes from taking what's useful about it and applying that to the SOC.
"The only way security is going to get done right is if you pay attention to data - to what the data is telling you," he said. You can use machine learning to understand patterns, find anomalous behavior and stop it - to "fight bad actors with automation and data analytics and ML," he said.
In this video interview with Information Security Media Group at RSA Conference 2023, Arora also discusses:
- The need for "data heft" to properly train generative AI models;
- The shift from a post-breach-centered SOC to one that is proactive;
- How Palo Alto Networks strives to create products that are best of breed and that also work together.
Prior to Palo Alto Networks, Arora held a number of positions at Google, including senior vice president and chief business officer, and president of global sales operations and business development. Before that, he was the chief marketing officer for the T-Mobile International Division of Deutsche Telekom.
Michael Novinson: Hello, this is Michael Novinson with Information Security Media Group. We're going to be discussing artificial intelligence in the SOC. To explore this further, I am joined by Nikesh Arora. He is chairman and CEO at Palo Alto Networks. Hi Nikesh, how are you?
Nikesh Arora: Good, Michael. How are you?
Novinson: Doing well, thank you. It has been about five months since ChatGPT took over the world's headlines. I want to get a sense from you at a high level, what do you feel are the biggest cyber risks and opportunities around generative AI?
Arora: Well, it's a very specific question. I think, first of all, ChatGPT is amazing. We have all been talking about using AI. When I go to my team internally and say, look at this - ChatGPT - they respond, we've been using supervised machine learning models for the last 12 years. We've been using unsupervised models for the last seven years. But I think what ChatGPT has done is it has reformed the way we interact with computing. In the traditional model, you design a product, you have a bunch of data, you spend a lot of time building UI, and you have a lot of product managers. You have UI engineers who try to anticipate how the end user or customer is going to use your product by building UI. What ChatGPT has shown is: why am I creating a new language called UI and making you translate into it, asking me questions in my UI or in SQL? Just ask me like you would normally. So I think that's the kind of power of what ChatGPT has done. It has immense memory. It remembers everything that was ever written about a topic, so it can summarize things much faster for you. This summarization capability, this recursive, regressive statistical model that it has, allows it to feel almost like you're talking to another person. I think that has a lot of implications, not just in cybersecurity but in almost anything that we're going to do. Many people have called this the iPhone moment, and I think it probably is the iPhone moment for AI.
From a cybersecurity-specific perspective, what's interesting is that I've seen early examples of people trying to use it to create malware. Now, there's good news and bad news. The bad news is, it can do so. The good news is that because it's relying on prior models, which are recursive, regressive models, it's making malware similar to what it has seen - which is good for now, because it allows us to identify the patterns; we know them from before.
So we are able to build blocking techniques against the malware or attacks it is generating. But it can generate phishing attacks at scale, if you want, and it can generate them on a customized basis. We're going to have to contend with that. We have to fight computing with computing, and we have to fight AI with AI. So I think that's where we are going to have to go from an opportunity perspective. And it's another wake-up call to anyone who's not paying attention to making sure they're secure.
Novinson: We've seen organizations already talking about embedding ChatGPT in their technology. For companies to get serious benefit out of it, what's the foundation they need to lay? What are those initial steps that need to be taken to get value from generative AI in security tech?
Arora: I think it's important to take the good parts of what ChatGPT offers as a window into how AI can be useful, as opposed to blindly copying the ChatGPT model into whatever industry we're in. I see this huge flurry of activity where people want to quickly integrate OpenAI into their products. And look, I can talk to my product, but be careful - there are kind of two threads here. There's the use case where it's okay to have multiple answers to the problem: write me a story, write me a song. There's no right answer. There's a good answer, there's a better answer, there's a bad answer, but it doesn't matter, because it depends on your taste. Sometimes I like some songs and you may not like them. So to the extent that variability is possible in your answers, that's fine. I like to call it a sandwich problem: a human prompts it, a human assesses the output. It's great - it's a contained problem. You can converse with it, you can make it smarter, and you can keep asking and prompting better and better questions. But you still have to be careful: there is a risk of hallucination. Maybe it doesn't know the answer, so it just makes it up, statistically - here's maybe what you want to hear. Sometimes that's nice: you tell me what I want to hear, I like you more. But the risk is in cases like ours, where you need precise answers and wrong answers are not acceptable. You need a much more precise way to get the answer. That's what you need to watch out for. Just blindly putting ChatGPT into every product and calling it a copilot is dangerous. You have to think about: What data are you using to train your system? What are the answers going to look like? How do you avoid false positives? How do you avoid hallucinations? There's a lot of work that needs to be done. But it looks very promising.
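The "sandwich" pattern Arora describes - a human supplies the prompt and a human assesses the output before anything is acted on - can be sketched in a few lines. Everything here is illustrative: the function names and stand-in "model" are not a real API, just a shape for the human-machine-human flow.

```python
def sandwich(prompt, model, review):
    """Human-in-the-loop 'sandwich': a human writes the prompt, the
    model drafts an answer, and a human reviewer approves or rejects
    the draft before it is used. `model` and `review` are stand-ins
    for a real LLM call and a real analyst check."""
    draft = model(prompt)                      # machine fills the middle
    return draft if review(draft) else None    # human gates the output


# Toy stand-ins: the "model" echoes the prompt; the reviewer
# rejects anything that comes back empty.
approved = sandwich("summarize alert 1234",
                    lambda p: "draft summary of: " + p,
                    lambda d: len(d) > 0)
print(approved)
```

The point of the shape is the second hedge: nothing the model produces reaches production unless the reviewer returns true, which is what makes hallucinations a contained problem rather than an operational one.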
Novinson: So what are they? What do you feel are those foundational steps or those building blocks that companies should put in place to make sure that it's not generating false positives?
Arora: Michael, I'm going to sound like I'm saying "we told you so," but I'm going to say it anyway. You and I have talked about this: the only way security is going to get done right is if you pay attention to data. If you pay attention to what the data is telling you, you're going to have to use computing, you're going to have to use machine learning to understand the patterns, you will have to look at anomalous behavior and stop it, use AI - generative or otherwise - and figure out how to fight bad actors with automation and data analytics and machine learning. That's the opportunity; that's what we're going to have to do.
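A minimal sketch of the pattern-and-anomaly idea Arora outlines, using nothing more than a statistical baseline over event counts. The data, field meaning, and threshold are invented for illustration; a production SOC would use far richer features and models, but the principle - let the data tell you what normal looks like, then flag deviations - is the same.

```python
import statistics

def find_anomalies(event_counts, threshold=2.5):
    """Flag positions whose event count deviates more than `threshold`
    population standard deviations from the mean of the series."""
    mean = statistics.fmean(event_counts)
    stdev = statistics.pstdev(event_counts)
    if stdev == 0:          # perfectly uniform data has no outliers
        return []
    return [i for i, count in enumerate(event_counts)
            if abs(count - mean) / stdev > threshold]

# Hypothetical hourly login-failure counts; hour 5 spikes.
counts = [12, 15, 11, 14, 13, 250, 12, 16]
print(find_anomalies(counts))   # -> [5]
```

One design note: because the spike itself inflates the standard deviation, a naive z-score threshold has to be set fairly low; real systems usually compute the baseline from a separate, known-clean window.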
Novinson: Speaking of artificial intelligence - and I know that's been a point of emphasis for Palo Alto Networks - how do you see AI changing the way that the SOC works?
Arora: Well, think about it. Let's break it down into two parts. One, what AI requires is heft: if you don't have heft in the industry, if you don't have a lot of data you're processing, you can't train new models. It's very hard to say, "I'm starting a company and I'm going to train my model on my customers" - well, you need customers to start with. Now, we're blessed. We have 62,000 customers who use our firewalls. We have thousands of customers in our cloud security business, in our SOC business and in our SASE business. So that's a good starting point. Now, again, it's ours to ruin. We have to make sure that we use that data and apply it intelligently. We have been working on this for the last four years. And I've said this before: we launched a product, months before ChatGPT came about, called XSIAM. The whole premise of it was that we thought all the data strategies being deployed in the SOC were legacy strategies, where you ingested all the data you could find and didn't quite normalize it. AI has this problem of garbage in, garbage out, and the risk we've seen in the past is that we rushed to apply AI but the data foundations weren't strong enough. So we've built a good data foundation in our XSIAM product. We have looked hard at what incremental opportunities ChatGPT or generative AI brings forward, but all it has done is re-energize us: keep focusing on getting good data, building great models, training on our data and effectively using automation. And every human interaction is a training opportunity. Here's my hypothesis, here's how I think security should be done - dear customer, dear partner, dear user, dear employee: interact with it, tell me if it's the right answer or not. So what's going to change for us internally is, we're going to keep doubling down on AI and good data, but we're also going to use every human interaction as a training event.
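The normalization step Arora says legacy SOC pipelines skipped - mapping vendor-specific log fields onto one common schema before any model sees the data - might look like this in miniature. The field names and mappings below are invented for illustration, not any real vendor's format or XSIAM's actual schema.

```python
# Vendor-specific field names mapped onto one common schema.
# These mappings are illustrative, not a real vendor's format.
FIELD_MAP = {
    "src": "source_ip", "src_ip": "source_ip",
    "ts": "timestamp", "time": "timestamp",
}

def normalize(event: dict) -> dict:
    """Rename known vendor fields to the common schema; pass
    unknown fields through unchanged."""
    return {FIELD_MAP.get(k, k): v for k, v in event.items()}

print(normalize({"src": "10.0.0.5", "ts": 1700000000, "action": "deny"}))
```

Without a step like this, two firewalls reporting the same connection under different field names look like two unrelated events, which is one concrete way "garbage in, garbage out" shows up in SOC data.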
Novinson: Now, it's been a little over seven months since you introduced XSIAM. I want to get a sense from you of who's using it the most right now, what the profile of customers is, and how they're using it.
Arora: Well, the traditional approach to the SOC has been a post-breach approach, unfortunately. You have a problem, you have all this data, so you go query the data and figure out what happened - it happens after you've had a breach. And of course you're monitoring it to make sure you can fix hygiene and security issues, but for the most part the SOC has been employed as a tool to figure out how the breach happened, what happened, how you remediate it, how you spin up all the backups to bring things back - for the most part, a cyber-resilience product. We think a SOC should be a proactive product - a product where you can remediate security issues much sooner. At Palo Alto Networks, we took our own SOC from days of mean time to respond down to under a minute. It took a lot of work - it took us four years - but we've packaged that technology in XSIAM. And we've done it very carefully. We've exposed it to customers who already have a lot of Palo Alto Networks in their infrastructure alongside other vendors - we support every vendor out there. But we do require you to have our XDR product, because we think having a single source of truth for data is important. We showed it to 10 customers who became design partners, and all of them have become paying customers in a very short period of time. We continue to see interest, but we're doing it very carefully: we're exposing it to more and more customers who we believe are aligned with our product road map, and we use that as feedback, as a way to make our stuff better and better. So look, I think it's a very promising category. I see a lot of enthusiasm around it. I see customers are tired of legacy SOC solutions, which have relied on data ingestion for 15 years. It's time for that part of the industry to have an inflection point, and I think this is it.
Novinson: I want to ask you, finally: for as long as I've been in this industry, people have been talking about consolidation. Over the past 12 months, as the economic downturn took hold, how have those conversations around vendor consolidation and reducing the vendor footprint changed?
Arora: When I joined the industry five years ago, I was told by the industry leaders and participants that cybersecurity is not going to consolidate, because people want best-of-breed solutions and they're not going to buy something just because it works together. And I said, well, what if it works together and it's also best of breed, and you can buy it individually or together? So I'm hoping we've proven to the market, slowly and steadily, that if you solve customers' real problems, if you solve them with a great product, and if your products work together and show the benefit of being together, then it'll lead to customers buying more things from you. Notice I didn't use the word consolidation, because what we want is to build best-of-breed products in all categories, we want our products to work better, and we want our products to work together for our customers and encourage them to get more of our products. And we're seeing that happen. I have to be careful - we're in the midst of our quiet period - so I'll say what I said the prior quarter, which is that in the current economic climate, customers have less budget to go out and try new things. They want trusted names, they want people who can deliver, they want value, and they want ROI. And in that case, we are a trusted name. We have proven to the market that we bring best-of-breed capability and that our stuff works together. That's what drove behavior from our customers in Q1 and Q2.
Novinson: It definitely will be interesting to watch going forward. Nikesh, thank you so much for the time.
Arora: Thank you for having me, Michael.
Novinson: Of course! We've been speaking with Nikesh Arora. He is chairman and CEO at Palo Alto Networks. For Information Security Media Group, this is Michael Novinson. Have a nice day.