ISMG Editors: Emerging AI Tech for Cloud Security in 2024
Payments Expert Troy Leach Joins the Panel to Cover AI, Zero Trust and IoT Security
Anna Delaney (annamadeline) • January 26, 2024
In the latest weekly update, Troy Leach, chief strategy officer at Cloud Security Alliance, joined three editors at Information Security Media Group to discuss important cybersecurity issues, including how generative AI is enhancing multi-cloud security, AI's influence on authentication processes, and the state of zero trust and IoT security.
The panelists - Anna Delaney, director, productions; Mathew Schwartz, executive editor, DataBreachToday and Europe; Troy Leach, chief strategy officer, Cloud Security Alliance; and Tom Field, senior vice president, editorial - discussed:
- Real-life use cases of organizations that applied AI to their cloud security practices;
- How AI could influence established authorization and authentication processes and what potential changes organizations should anticipate;
- The current landscape of zero trust adoption and IoT security vulnerabilities.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the Jan. 12 edition on whether we will ever get a handle on API security and the Jan. 19 edition on why crypto phishing attacks are surging.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney. This week, we're looking at generative AI and how it's reshaping security across multi-cloud environments, along with a reality check on the current state of zero trust and IoT security. Guiding us through these critical issues is Troy Leach, chief strategy officer at Cloud Security Alliance. Excellent to have you back, Troy.
Troy Leach: Thank you, Anna. Great to be back.
Delaney: Also joining me are ISMG superstars - Tom Field, senior vice president of editorial, and Mathew Schwartz, executive editor of DataBreachToday and Europe. Great to see you all. Troy, we have got some questions for you. I'm going to hand it over to Tom to start us off.
Field: Troy, as we start this year, what are some of the real-life use cases you're seeing of organizations applying gen AI to their cloud security practices? I've had this discussion with some CISOs and security leaders, and I hear a lot about can do, going to do, would like to do. I don't hear a lot about what they're doing, so I turn to you.
Leach: I'm hearing from some of the frontier model providers that are out there working with or piloting some of this, and some of this AI has been around for five-plus years; I'll give a couple of examples. A lot is happening, and the reality is that cloud security is now cybersecurity. Everyone has migrated to the cloud; for several organizations, more than half of their critical business assets are now in the cloud. The things I'm seeing in AI that are already out there and being implemented are user behavior analysis tools. These are being integrated with GPTs to detect anomalies in employees' behavior. In one example I was told about, if someone asks for access to a particular file and it'd be out of character for them to do so, or outside the time zone they'd normally be in, the AI will send them a Slack message and have a conversation, just as ChatGPT would, to understand the legitimacy of the request. It will also provide immediate training if it was something they should not have been asking for. I'm already seeing that being used for training in organizations in the financial services industry. One example that's been given publicly is Discover Financial, where call centers have more personalized training: as agents listen to calls, the AI is no longer just a script walking through what the problem could be; it is listening to the problem and getting to the issue much more quickly. We're also seeing this in code. Anthropic, Google DeepMind and others have the ability to do secure code analysis.
As code is downloaded and received by the company, they can sometimes reverse engineer it and find vulnerabilities that might not even be known yet - ones without an existing CVE, a Common Vulnerabilities and Exposures identifier - and they'll sometimes go and correct the code if they have the authorization to do so. With all of that, one of our security metrics is MTTR - the mean time to remediation - and I think we're already starting to see significant changes in how we use that metric because AI is making remediation so much faster. I'll add one other thing, because of my background with PCI and doing a lot of regulatory frameworks, including the Cloud Controls Matrix at CSA: one of the frontier model providers mentioned that their customers ask them for compliance documentation demonstrating security within cloud environments, and they're having AI complete all those forms and then doing a manual check for validation, with about 90% or more accuracy. There's real excitement around minimizing all the documents people have to fill out in the security industry, so good security professionals can focus on hard security problems rather than filling out multiple forms.
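The behavior-analytics flow Leach describes - flag an out-of-character file request, then have a chat bot follow up with the user - can be sketched roughly as follows. This is a minimal illustration, not any vendor's actual implementation; the profile fields, hours and message wording are all assumptions for the example.

```python
from dataclasses import dataclass


@dataclass
class UserProfile:
    usual_files: set       # files this user typically accesses
    usual_hours: range     # typical working hours in the user's home time zone


def is_anomalous(profile: UserProfile, requested_file: str, hour: int) -> bool:
    """Flag a request that is out of character or outside usual hours."""
    return requested_file not in profile.usual_files or hour not in profile.usual_hours


def follow_up_message(user: str, requested_file: str) -> str:
    """Draft the chat follow-up an LLM-backed bot might send (hypothetical wording)."""
    return (f"Hi {user}, we noticed a request for '{requested_file}' that is "
            f"unusual for your role or time zone. Can you confirm why you need it?")


profile = UserProfile(usual_files={"q3_report.xlsx"}, usual_hours=range(9, 18))
if is_anomalous(profile, "payroll_db_dump.csv", hour=3):
    print(follow_up_message("alice", "payroll_db_dump.csv"))
```

In a real deployment the follow-up would be generated by a language model and delivered over a chat integration such as Slack, with the "immediate training" step triggered if the user's answer confirms a policy miss.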
Field: Given all that, Troy, where do you see the greatest opportunities to apply gen AI to bolster these security efforts across multi-cloud environments today?
Leach: Yeah, it's a great question, and it's going to evolve over time. That's why at CSA we have four research groups working on AI for multi-cloud environments - both gen AI and discriminative AI. They offer assurances that the intentions in one environment can be replicated without error in another cloud architecture. The biggest problem we have today - and I hear this all the time, especially in the financial services industry, with the upcoming requirements of DORA and other regulatory expectations - is that regulators are saying, for your cloud service providers, we want better resiliency. We're working with the U.S. Treasury; they came out with a report last year saying the same thing - we want to see more multi-cloud architectures and that type of resiliency in critical infrastructure such as financial services. With that, organizations are having to double their staff because the architectures are not the same. Azure is not like AWS, which is not like IBM, so there's a need for additional staff trained on the intricacies of each type of architecture. What we're seeing with AI is that it's going to support good practices once it is fine-tuned and trained on the intent, and it's going to be able to assure that what was intended in one environment carries over into another cloud environment. That's exciting for me. In general, the security of AI is going to mature very much the way it did for cloud architecture. What we saw with IaaS - infrastructure as a service - probably maps to your public and private large language models and how you manage those different types of shared responsibilities, and the APIs that work with GPTs and gen AI delivered as a service will, I think, end up with controls very similar to those we eventually created for PaaS and SaaS.
I think this is going to be security that matures over a period of time, and we're going to see different but similar security strategies that include a shared security responsibility model. The biggest question I hear, whether in Congress or elsewhere, is: "Who has the liability and responsibility? Is it the creators of the large language model? Those who create APIs to engage and use the model? The enterprise, in how it inserts data and creates its datasets?" There are going to be a lot of good questions, but I think we're on a good path - a good roadmap, I should say - given how we conducted security with cloud over the last 10 to 15 years.
Field: Excellent to hear you so bullish on the topic, Troy. Thank you. I'm turning you over now to Anna.
Delaney: Thank you so much. I'd like to turn to authorization and authentication. With the ongoing advancements in AI, how do you foresee its influence on well-established practices of authorization and authentication in organizational security, and what are the potential changes that organizations should anticipate and prepare for, and how do they get there?
Leach: The biggest influence I'm seeing on authorization and authentication in general is how easy it now is to spoof biometric data. We talked about that in the August session - being able to capture someone's video likeness and voice and look very authentic. We've seen red team exercises where CFOs have been mimicked on a Zoom call - their video, their voice, their inflections - and used to ask for wire transfers. That's something we're going to see quite a bit more of: a large spike in attacks and in the ability to manufacture these types of false images. The reason is that we've put a lot of faith in biometrics as gatekeepers - you have a unique voice, you have a unique fingerprint - and AI is going to challenge that. Also, the number of successful phishing attacks is going to skyrocket, because malicious GPTs such as WolfGPT, WormGPT, FraudGPT and several others are lowering the barrier to entry and making it easy to create a phishing attack. The telltale signs we used as parameters in our authentication - or even just the human element of spotting poor grammar, misspellings and blatantly bad domains - are going away, so it's going to be very difficult to rely on them. Organizations are going to have to combat a significant volume of new phishing attacks.
Delaney: From your perspective, are there any specific AI-driven technologies or techniques that are proving particularly effective in enhancing authorization and authentication practices?
Leach: The best defense I'm hearing about is AI defending against AI. We're going to need to use AI more quickly to evaluate source code at end-user locations - I mentioned reverse engineering and reacting faster to some of the easier traps. We're going to have to rely on techniques that go beyond antiquated, signature-based methods of matching known CVEs; the development pace of malicious software is simply moving too fast. Another technique matters especially when we look at GPT risks such as evasion, extraction and poisoning - these are the main concerns in the frameworks being built, whether at CSA, MITRE or DARPA, and they are consistently the biggest threat areas. It is crucial to develop security policies within APIs that evaluate the output, see if there's anything questionable and rerun it back through the prompt before delivering it to the end user. For example, say an output contains PII and you're concerned about GDPR, or executable code shows up that shouldn't be released - or it was supposed to produce executable code and doesn't. These are conditions some of these APIs can detect and put back through the AI before the output reaches the end user. It'll be a little like Minority Report, where the AI acts as a precog and you find the vulnerability before there's a problem to exploit.
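The output-policy idea Leach outlines - inspect a model's response for PII or unexpected executable code and rerun it before release - might look roughly like this. The regex patterns, retry count and withheld-response text are illustrative assumptions, not a reference to any specific product's guardrail API.

```python
import re

# Hypothetical policy checks: a US SSN-style pattern, an email pattern,
# and a fenced code block as a proxy for "executable code in the output."
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]
CODE_FENCE = re.compile(r"```")


def violations(output: str) -> list:
    """Return the policy violations found in a model's output."""
    found = []
    if any(p.search(output) for p in PII_PATTERNS):
        found.append("pii")
    if CODE_FENCE.search(output):
        found.append("executable_code")
    return found


def guard(output: str, regenerate) -> str:
    """Rerun a violating output through the model (via `regenerate`) before release."""
    for _ in range(3):  # bounded retries, so the loop always terminates
        if not violations(output):
            return output
        output = regenerate(output)
    return "[response withheld by policy]"
```

Here `regenerate` stands in for a call back into the model with a revised prompt; if the output still violates policy after a few attempts, nothing is returned to the end user.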
Delaney: Love a bit of Tom Cruise. That's great. Thank you, Troy. That has been informative. I'm passing on the baton to Mat.
Schwartz: So is it going to be Mission: Impossible, do you think, to make zero trust better? I don't know if AI factors into this discussion, because I forget how many buzzwords came before AI, but zero trust has definitely been and continues to be a real target for organizations. They're attempting to get to a place where they can apply zero trust principles. Is AI going to help with that? Are there other things that will? Where are we at?
Leach: Zero trust has the inverse problem of AI. AI is very complex to understand - the intricacies of how large language models truly operate and how you train them - but it's easy to implement. Zero trust, however, is a very easy concept but one that is uniquely challenging to consistently implement. In May, it will be three years since the 2021 executive order on improving the nation's cybersecurity, in which zero trust was emphasized and given a spotlight. At least here in the U.S., and abroad as well - we're doing a zero trust meeting in Switzerland in April for CSA - there is buy-in and awareness at the executive level. That's a pretty dramatic step in the last three years, considering John Kindervag and colleagues coined the term more than a decade before that. The term zero trust - Mat, you touched on this - is widely overused and abused in marketing campaigns, to the point that some CISOs completely shut down if the words are even uttered in front of them. The key is education: reminding people what the purpose is, and that zero trust means starting small and identifying the most critical or highest-risk business asset. What I'm encouraging from a business case perspective is that zero trust can demonstrate operational efficiency, something I've preached for many years. It's not just a security metric but also a business financial metric: if you have truly focused security, it can streamline business processes and improve the overall health of the organization. We're seeing that quite a bit. At CSA, we launched the Zero Trust Advancement Center last year, with a surprising number of people engaged and doing training.
We're seeing the most interest in this training - and the reason people need the education is that they understand the concepts, but it's hard to grasp how you take cloud access controls, monitor for continuous authorization, establish the right security policies for each type of cloud service provider, establish good public cloud architecture and then understand how to segment it, which is a big part of zero trust. All these things are easy on their own, but applying a zero trust philosophy across them is a lot more difficult. I'm encouraged to see that it's at least at the forefront, with people trying to educate themselves on how to go about applying zero trust. We're better off than we were three years ago, but there's a long road ahead of us.
Schwartz: Of course, it's never static, and the requirement to stay educated and ahead of things keeps changing. That brings me to another area I wanted to ask you about, because I know you've been keeping a close eye on the internet of things, and we need to keep talking about it, because we continue to see such an explosion in internet-connected devices of all stripes - for example, automotive. We're seeing increasingly connected cars, which, for anyone who has been in this business as long as we have, might stoke some fear. How are CSA's efforts touching on IoT? Do you see this deceptively complicated area getting the focus it needs from a manufacturing standpoint? You were just touching again on education.
Leach: It's been I don't know how many years since Charlie Miller took a couple of vehicles and showed how to remotely control them over the CAN bus. It's something we need to take seriously, and there is growth. You mentioned major adoption is happening in IoT all around the world, but today I'd say the vast majority of the 16 billion-plus IoT devices are in North America and Europe. With the accessibility of cloud services and the growth of smart devices, though, we're starting to see more adoption in India, Japan and other parts of APAC especially. Tom said earlier that I was bullish on AI. A lot of people are bullish on IoT, expecting 30% year-over-year revenue growth for the next 10 years. We'll see about that - I also predicted I would keep my New Year's resolution, and that's long past. But I know at Cloud Security Alliance, we developed an IoT framework several years ago, currently on version 3, and it's picking up interest in smart transportation - the automotive industry in particular. We're working with the ENX Association, an association of European vehicle manufacturers, and will likely be working with U.S.-based manufacturers, to have their recognized certification, TISAX, recognized by our STAR registry for IoT, and to extend the framework's applicability to IoT but also to the cloud. All of this is going to be determined in a project that kicks off this month and should run for about three months. We are a nonprofit organization with volunteers - anyone and everyone is welcome to participate in developing that next framework for how cars will be kept safe for the next several generations.
Schwartz: Looking forward to that safety in our automotive industry. Thank you for your efforts, and we'll have to check back on those. I'd like to hand over now to Anna if I may.
Delaney: Excellent! Brilliant stuff. Finally, and just for fun, if you could choose an AI system to direct a remake of a classic movie, which movie would it be and how might the AI bring a fresh perspective to the story? Troy, do you want to go first?
Leach: There are a lot of good choices out there. I should have run this through ChatGPT and come up with a more creative answer, but I would go with 2001: A Space Odyssey, because there was a lot of sci-fi in the 1950s and 1960s with artificial intelligence as the main antagonist. That was a vision, 35 to 40 years in advance, of what AI would look like by 2001, and now here we are in 2024. It'd be interesting to get AI's take on itself - how it would rewrite the script of how good AI could be - and see if it could put a nice little spin on what AI will look like in the future.
Delaney: Nice. We like that one. Tom?
Field: Slightly less cerebral. I'm going back to one of my favorites - Young Frankenstein. Imagine if our friend the monster, instead of being given the brain of Abby Normal, was given artificial intelligence. What a different film that might be.
Delaney: That is creative and I love it. Mat?
Schwartz: I'm looking forward to the AI hallucinations coming out in that one. Mine isn't a particular film, even though it's been filmed multiple times - Beowulf. Beowulf is the hero, but there's a very famous 1971 novel by American author John Gardner that flipped it and looked at Grendel as the hero. If we brought that methodology to bear on some AI films where the AI is the obvious villain - I was thinking of The Matrix. Who is the Matrix? What does it want? What are its hopes and dreams? Or Terminator - is it just about shiny chrome killing machines, or is it secretly into puppies? If we could flip some of those things, we could have an interesting re-evaluation of these movie villain tropes.
Delaney: Very good. I've gone from villain to The Wizard of Oz. Have you heard of the film Bandersnatch?
Field: No.
Schwartz: Yeah. Netflix - choose your own adventure.
Delaney: Exactly! Charlie Brooker lets you shape the story, blurring the lines between a game and a story. I was thinking it could be interesting to do something similar with the classic The Wizard of Oz, using deepfake technology. You could make decisions at certain points, like choosing the challenges on the yellow brick road or deciding what happens to the ruby slippers - maybe they have a different fate. A fun take on a classic.
Leach: I like that one a lot. It reminds me of school days and those books where you would jump around.
Field: Choose your own adventure.
Leach: If you choose this, you choose your own adventure. I love it.
Delaney: Maybe we will be choosing our own adventures in the future with AI. But thank you so much for joining us on this adventure, Troy. You've been brilliant.
Leach: I appreciate the invite. Thank you!
Delaney: Thank you so much for watching. Until next time.