Michael Novinson: Hello, this is Michael Novinson with Information Security Media Group. I'm joined today by Nicole Eagan. She is the chief strategy and AI officer at Darktrace. Good afternoon, Nicole. How are you?

Nicole Eagan: I'm doing great. Thanks so much for having me.

Michael Novinson: Thank you for making the time. I wanted to start off by talking about the first acquisition in Darktrace's history: you acquired Cybersprint back in February for 54.7 million. What does the integration process around the Cybersprint acquisition look like? What does that allow you to do?

Nicole Eagan: I think what was really interesting is if you look at Darktrace and our mission and evolution, we've always been focused on applying artificial intelligence to the existential threat of cyber. We did that in a very unique way: we created our own proprietary machine learning approach, which we call self-learning AI. And what that did was actually learn about an organization from the inside out. And that really helped, because it meant that it could detect novel threats, insider threats and things that were being missed by other approaches. So then, talk about Cybersprint and where this fits in.
We were always focused on this inside-out data, kind of the internal threats and attack surface. And what we found really interesting about Cybersprint is that they were looking at everything we weren't: the external attack surface. So this was really about being able to combine the internal with the external, while using this very bespoke and unique AI approach we had. We couldn't be more thrilled with the integration efforts. In fact, we've already combined the Cybersprint acquisition into a research project that we had been working on for a couple of years at our AI labs, called Darktrace Prevent. And this really shifts everything. You probably will know, Michael, that everyone's been firefighting in the cyberspace, and it's been very reactionary: you get attacked, and you try to spot it and stop it as fast as you can. And we really felt that this whole industry is shifting quite radically to be more about true cyber risk management, and moving to what people often call left of the attack. So what if we could actually use AI to predict and prevent attacks? Cybersprint is a big part of that for Darktrace.
Michael Novinson: What's different about trying to apply AI to the prevention phase, as opposed to the detection and response phases, where Darktrace has played historically?

Nicole Eagan: It's a great question, because when you look at applying AI to detection, for example, your AI is actually analyzing existing data features and moving as quickly as it can to augment those human teams with machine-speed response. But when you actually start to look at Prevent, it's a bit about asking the AI to look into things that haven't happened yet. And that's a different data set entirely. So I think what's interesting with the whole Prevent space is you're actually doing things like asking: if the AI were to pretend it was an attacker, first, how would it look at you from the outside world? What can it see? And how might it leverage that visibility to spin into the inside and do a very targeted attack? And then you get into new areas of applying AI. Basically, we had to work for quite a long time to figure out how to get an AI algorithm to understand the crown jewels, or key assets, of any given organization, bespoke and specific to that organization.
So, for example: who are the most vulnerable people combined with the most vulnerable targets? And most importantly, do you have countermeasures in place? So can you actually have AI emulate the attacks and say, "You know what, for this attack, even if somebody had found this vulnerability, you already have three layers of countermeasures in place. You're okay there. But over here, if we look at what happens in this attack path simulation, you did not have anything in place. So what can we do to actually harden your systems?" And that gets really interesting, because now it means you can feed the kind of insights you get from Prevent into your existing detect and respond, hardening the whole environment. And I think that's a game changer, from what we can see.

Michael Novinson: What's different from the customer standpoint? What's different about the view they get from external attack surface management versus the inside-out view that Darktrace has provided historically?

Nicole Eagan: I speak a lot to Darktrace customers about this, and I've been able to actually see it live, running in a number of their environments. I think it comes down to the use cases.
What are the use cases you can now address? On the other side, in detect and respond, we were usually talking about what kinds of threats we're responding to: insider threats, zero days, novel attacks. And I find that when you're talking about attack surface management, it's much more use-case driven. So, for instance, third-party risk and supply chain: there's a great new capability that we've added into Darktrace attack surface management called Newsroom. What happens when you actually see breaking news stories about threats? The first thing everyone seems to ask is, "How bad is it? Does it apply to me?" So being able to actually go to the attack surface management console, see these real-time news feeds, and then actually see whether it's green (in other words, you're okay, this isn't in your environment) or it's red and you've got 55 instances of it, and let's pinpoint those for you immediately. So I think those use cases. We also see a lot of shadow IT use cases. And another one that is quite interesting is actually cloud environments, right?
Somebody spins up a cloud environment, and maybe they were just testing out some new systems, or maybe it was a project that then spun down and they forgot to close down the cloud; we see a lot of those types of cloud instances. One use case I had never thought of, which was an eye-opener for me given my passion for AI, was actually being able to see when one of your AI systems is training off of third-party data. I don't think a lot of people think, "Oh! That's something that's visible from the internet, and actually can be used as a brand new attack surface." So I think there's that breadth of almost endless-feeling use cases you can get with attack surface management. But again, the most important thing, I think, is being able to feed that knowledge into your internal security systems to harden the environment and, really, kind of in the background, just keep that whole ecosystem working in its optimal state.

Michael Novinson: In terms of Darktrace Protect, it's a relatively new offering for you. Is it the same profile of customer using Protect as is using Detect and Respond and Heal? Or are you seeing some different customer types, or different customer profiles, using the Protect feature?
Nicole Eagan: Yep, so it's Prevent. And one of the things you see is that what ends up happening is that a lot of our existing customers, who already are passionate about what we're able to do with Darktrace Detect and Respond, are shifting their attention to being more proactive, more preventative. But we also start to see some new people, even from existing customers, come into the equation. So, for instance, we're seeing chief risk officers join more of the meetings together with the chief information security officer. We're seeing even compliance angles, where people are interested in, you know, how does this maybe help me make sure that I'm staying in compliance, and that people in the outside world can't see some of my sensitive data that might be protected by different compliance acts and things like GDPR? I think what we also are seeing is that if a company has red teaming in place, whether that be internal red teaming, a combination of internal and external, or actually even third-party pen testers, we're starting to see those people get involved, whereas when I think about Detect and Respond, we're normally talking to the blue team, right?
So in larger organizations, where they have these teams split up, you'll start to see some of that pen testing and red teaming type of activity kind of filter in. But we also are seeing new customers who want to start with Prevent, who kind of say, "Well, isn't this the logical place to start? Let me start by understanding where I might be vulnerable, but most importantly, what my highest priority cyber risks are." And that's the stuff, quite frankly, that oftentimes the CISO wants to be able to communicate to the board of directors, right? We're really trying to uplift a lot of this communication, not to just be about digging in the weeds, so to speak, but really talking about: here are the five or 10 top cyber risks facing us today, bespoke to our organization. Here's where we already have countermeasures. And here's where we're going to make some investments and shore things up.

Michael Novinson: Very interesting. I want to talk a little bit about the work you're doing as chief strategy and AI officer, and particularly around the AI labs that Darktrace runs. What have been some of the key areas of focus or investment for you in terms of the AI labs?
Nicole Eagan: Yeah, so in terms of the Darktrace AI research center, I feel it's absolutely fundamental and foundational to how we operate as a company. Rather than operate on what many vendors call pure product roadmaps, we actually start by looking at breakthrough research. Our researchers, of which we have 150 members, 80 with master's degrees and 30 with doctorates, come from very broad backgrounds, everything from astrophysics to linguistics to data science. And they've actually produced numerous award-winning breakthroughs from the center that later make their way into actual products. I'll highlight maybe some of those top-level breakthroughs. One, and this was the foundation when we founded Darktrace, was using epidemiology theory to identify the most infectious devices inside an organization. We also had some real breakthroughs around autonomous response, back in about the 2015 timeframe. Our approach to autonomous response was that it should work seamlessly across all your environments; it doesn't matter where the attacker is coming from, because they're going to pivot.
So we had this view that it needed to work seamlessly across your email, your SaaS applications, your cloud, your endpoint and, if necessary, your OT and IoT devices: so, very broad. In Darktrace Prevent, we had a whole research project about using graph theory to map out risk assessment paths across the enterprise and do that risk prioritization. I think one of our most interesting breakthroughs is what we call Cyber AI Analyst. This AI was quite different for us: it learned off of human threat analysts, and it learns how to hypothesize about what it is looking at, and what theories or hypotheses it needs to prove or disprove to validate that this is actually a threat. And that AI analyst saves teams time: about 92% time savings on threat investigation, which, we know, with the skill shortage and headcount shortages, makes all the difference in the world. So, literally, we've filed well over 100 patents out of the research center. I've just listed a handful.
But the exciting thing to me is that we actually publish all this research on our website, and people can just peruse it. We hope that through this effort, as well as through productizing, we're giving back overall to the broader cyber community.

Michael Novinson: Darktrace announced earlier this month that it was unable to come to terms on a take-private offer from Thoma Bravo and would instead continue forward as a publicly traded company. A two-part question for you. First, why did Darktrace decide to move forward as a public company rather than going private under Thoma Bravo? And secondly, what does that decision mean for your customers?

Nicole Eagan: Well, I think it has always been our path, right? When you start out a company, you envision building a successful business, expanding to over 7,400 customers, as we've done. We just completed our first full year as a public company, so we're fairly newly public. And we just announced really strong, stellar results that were right on target and exceeding expectations. So I feel like, in many ways, we're at the start of our journey as a public company, not at the end.
And I think we just need to follow the path: the business is growing tremendously well, and customer satisfaction is very high. I think especially of the recent acquisition of Cybersprint, the general availability and quick uptake of the Prevent offering, and the technology vision of a continuous Cyber AI Loop that we announced last year. We're also working on these really neat areas around self-healing technology. So, as I said, for our executive team and our leadership, we feel like we're still at the beginning of our journey as a public company, and we just have so much more we want to accomplish, provide to our customers and give back to the greater community.

Michael Novinson: As you referenced in your answer, it's been a little over a year since Darktrace went public on the London Stock Exchange. What does that mean for your organization, both in terms of customer visibility and awareness, as well as from a financial standpoint, in terms of money for R&D, technology, etc.?

Nicole Eagan: Well, I think one of the reasons you do go public is actually to be able to increase your investment in R&D.
It also helped us fund things like the Cybersprint acquisition, and really to fulfill this greater vision we have around our Darktrace Cyber AI Loop. So I think you're dead on: that is why you go public, so you get access to that type of funds to make those increases in R&D and technology investment and really be able to deliver to your customers on this bigger vision that we have.

Michael Novinson: Very interesting. Let me ask you, finally: our readership at ISMG is primarily chief information security officers. What do you feel is the biggest area that CISOs are overlooking when it comes to their security strategy today?

Nicole Eagan: It's a difficult but very good question. I think it's elevating the conversation to one that's much more about cyber risks to the company, and really understanding what the impacts of a particular attack would be. How could you get out in front of it? How could you strengthen and harden your environment? And how can you articulate this in the language of the business? Right.
So, you know: if this particular incident were to happen, there's this kind of probability that it would result in a business outage, and it could take us X amount of time to recover from that. So I think it's looking at this bigger picture of cyber risk. And as we get into the next area, the one I alluded to that's coming out of our research center now around Heal (which is not a product yet; it's still in the research phases), I think that gets into really the area of cyber resilience. And overall, and I do think CISOs do a great job of this, it's really looking at all the places where you can automate and augment your human teams and lift them up: to deal with the future of your business, to work with other departments, and to make sure that security is built into your next product offering. Let's try to lift them out of having to live in this ongoing environment of firefighting that we've seen for the past decades. So I think that's the transition I actually see a lot of CISOs proactively making: lifting up into cyber risk, planning for cyber resilience, and then figuring out how to automate and augment to help out their human teams.

Michael Novinson: Interesting stuff. Nicole, thank you so much for the time. Thank you.
We've been speaking with Nicole Eagan. She is the chief strategy and AI officer at Darktrace. For Information Security Media Group, this is Michael Novinson. Have a nice day.