Michael Novinson: Hello, this is Michael Novinson with Information Security Media Group. Today, we're going to be taking a deep dive into data security. To explore this further, I am joined by Manny Rivelo. He is the CEO at Forcepoint. Hi, Manny, how are you?

Manny Rivelo: How are you, Michael? Pleasure to be here.

Michael Novinson: I want to start by taking a look at the threat landscape, getting a sense of some of the newest ways that you see cybercriminals and adversaries exploiting this explosion of data. What do you see there?

Manny Rivelo: Well, there are just so many different ways. I'll start off by saying there's a renaissance in data protection that wasn't there, you could even argue, a couple of years ago, partly because I think the market was looking at data loss prevention as data protection. But you've seen this new renaissance: the need to discover where your data is, and to discover it using intelligent mechanisms, AI or ML mechanisms; classifying your data; having good data governance, knowing who's touching your data, who has access to your data, who's moving your data; and obviously, data loss prevention. And you're seeing that across the whole threat landscape.
And the basic attacks don't differ. They're getting much more sophisticated, there's no question around that. But you still continue to see the phishing attacks, you continue to see the malware attacks, you continue to see all of that, trying to exfiltrate that data. And it's kind of funny, because now we're even seeing it firsthand: we had an engineer inside our corporation who, in two hours, using nothing but ChatGPT, asking it questions and writing zero lines of code, was able to create a zero-day vulnerability attack. And it was basically guaranteed to exfiltrate data, because it was tested on VirusTotal and nobody in the market caught it. So the technology is there, even now, to create much more sophisticated attacks than ever before. So you're going to continue to see it. But assume it's not just traditional mechanisms; there are new mechanisms coming our way.

Michael Novinson: So what are the implications for cyber defenders? If you have an engineer, no code, two hours, and generative AI can produce a highly effective cyberattack, what does that mean defenders need to be thinking about in turn?

Manny Rivelo: I think you've got to look at new tools; the old tools don't satisfy.
I mean, if you're using sandboxing technology, it's pretty straightforward to evade, because all you've got to do is pause the attack. If you pause it over two minutes, most sandboxes don't catch it, as an example. So you're really going to have to start using AI even to analyze that data and become much more intelligent. There are some great solutions out there. Obviously, the whole concept of zero trust in the way you engineer your network helps with that. And there are unique technologies, like content disarmament and reconstruction technology, which actually assumes that all files, everything, is infected. So it's zero trust around your content, to be able to give you greater security. So you've got to really revisit the way you're looking at data protection, and the way attacks are coming into your organization.

Michael Novinson: I realize we've had a lot of digital transformation, but at the same time, we still have a lot of legacy on-premise systems. What are some of the best practices for enterprises when they're thinking about securing their data that still lives on-premise?

Manny Rivelo: Yeah, I think it's a great question, because the on-prem environment is not going away.
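The content disarmament and reconstruction idea described above, treating every file as infected and rebuilding only the parts you can positively identify as safe, can be illustrated with a toy sanitizer. This is a minimal sketch for an SVG/XML-style document only; real CDR products rebuild many file formats and go far deeper, and none of the names here come from any particular product:

```python
import xml.etree.ElementTree as ET

# Element types that carry executable content; everything here is dropped.
ACTIVE_TAGS = {"script", "foreignObject"}

def disarm(xml_text: str) -> str:
    """Rebuild a document, keeping only elements and attributes judged safe."""
    root = ET.fromstring(xml_text)
    # Walk a snapshot of the tree so removals don't disturb iteration.
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag.split("}")[-1] in ACTIVE_TAGS:
                parent.remove(child)  # drop executable content outright
    for element in root.iter():
        for attr in list(element.attrib):
            if attr.lower().startswith("on"):  # inline event handlers
                del element.attrib[attr]
    return ET.tostring(root, encoding="unicode")
```

The key design point is the zero-trust default: nothing in the input is presumed benign, and the output is reconstructed from an allowlist rather than scanned against a blocklist of known signatures.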
There's always going to be data to some degree on-prem. There are employees that work on-prem; you may have call centers, you may have customer success teams that are working on-prem, you may have older systems that have data on-prem. So our approach has been one around data-first SASE, which is the construct of everything that SASE is, but the concept of data first is putting data protection in line with SASE. So for example, at RSA, we're announcing Data Protection Everywhere, which means we take the on-prem policies that organizations have built over the last decade or two, and now we apply that same policy across all the SASE channels. Think about one unified data protection policy across your email, across your USB ports, across your Wi-Fi, across your MiFi connections, across your CASB channels, your ZTNA channel, your SWG channels, making it very simple. So one head around data protection, applied and distributed everywhere, on-prem and off-prem.

Michael Novinson: And from an implementation standpoint, what does that entail on the back end in order to get in-line data protection into SASE? What does that involve from an architectural standpoint?
Manny Rivelo: Yeah, for us, it's basically an API interconnect that we've done across the two platforms that makes it quite simple. We are able to have a centralized policy and control plane for data protection, and then distributed enforcement through a set of APIs. And we can extend that to applications if we want to, also. As part of our security simplified initiative, we've done that integration for you, using our technologies, so it's out of the box. As a matter of fact, we already have customers in production with the technology who were using both platforms, and as we went into beta, they turned it on in production and it's working seamlessly.

Michael Novinson: I was curious, I know we've been talking about data protection, but in terms of initial access points for data exfiltration or data-based hacks, you had mentioned malware, you mentioned phishing as well. What tend to be the most common entry points for adversaries who are going after a victim's data?

Manny Rivelo: It's still the basic channels. It's still phishing attacks, it's still malware; those are some of the most common forms of getting in there.
The old mechanisms still work, and work quite well, because you're dealing with human behavior. No matter how much you coach people not to click on that URL that's embedded in an email, there's always a percentage of every organization that clicks on that URL. So those are still very simplistic tools that are coming at us. There are more sophisticated attacks, too. People are embedding malware in images. As you're downloading images from the web, whether that be your favorite picture of a cat, there might be malware inside it, because it's easy to hide malware inside an image: image files are quite large, and therefore it's easy to inject code inside those things. So you're seeing other forms of attack through steganography and things of that nature. But there's no question that the basic forms are still coming in the door today.

Michael Novinson: I know two terms tend to be synonymous with one another, SASE and zero trust, SASE being kind of the actualization, the realization of a zero trust architecture. I'm curious, given all of the federal initiatives in the United States around zero trust, what are the implications in the private sector of the attention the U.S. government has been paying to zero trust?
Manny Rivelo: Yeah, it means different things for the government. There is a set of zero trust technologies which are synonymous with the commercial movement, if you will, around implicit versus explicit security: separating your users from the applications and the data, trusting nothing, and only at the point of connection authenticating a user to only the data or applications that that user should have. That is also something that we see in the government, so you're starting to see that awareness. I would say two years ago, zero trust and SASE were unknown terms for most of the enterprise. Today, they're very well-known terms. There are very few organizations that do not have either strategies afoot, or plans to put strategies afoot. Now, to the government, it means even more, because there are technologies inside the government space, like cross-domain solutions, diode technologies, insider threat technologies, that are even other forms of zero trust. But it's synonymous now in the commercial space. When we talk to customers, it's hard not to get into a form of SASE or zero trust conversation.
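The point-of-connection model described above, trust nothing by default and authenticate a user to only the applications and data that user should have, reduces to a default-deny policy check evaluated on every connection. A minimal sketch, where the names and data structures are assumptions for illustration rather than any product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Entitlements:
    """Explicit user-to-application grants; anything not listed is denied."""
    grants: dict = field(default_factory=dict)  # user -> set of allowed apps

    def allow(self, user: str, app: str) -> None:
        self.grants.setdefault(user, set()).add(app)

def authorize(ent: Entitlements, user: str, authenticated: bool, app: str) -> bool:
    # Zero trust: deny unless the user is authenticated at the point of
    # connection AND holds an explicit grant for this one application.
    return authenticated and app in ent.grants.get(user, set())
```

The check runs per connection, not per network segment: an authenticated user with a grant for one application still sees nothing else, which is the separation of users from applications and data that the conversation describes.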
Michael Novinson: So for organizations who are maybe earlier in their journey, what tend to be the first steps that a company will take if they're looking to begin their journey to zero trust?

Manny Rivelo: Yeah, it varies by industry, and it varies even more by customer size and segmentation. If you're a large enterprise, a strategic account, you're usually moving from point products to a suite of platforms, if you will, a suite of technology, and you usually insert one: you may insert with CASB, you may insert with zero trust network access, you may insert with an SWG, or something of that nature. So they kind of do a replacement, get that established inside the organization, and then branch out. If you're an organization of under 10,000 users, we're seeing them move quicker to adopting the suite and taking out multiple vendors, because they don't have the expertise. We totally integrate the technology, we make it very simple for them to take advantage of it, and they get an ROI benefit from it, as well as a security benefit. So it does vary by customer size. But you could argue that on the high end of the enterprise, it's a replacement for an existing tech.
And in the smaller end of the market, it's the replacement of multiple techs with a platform suite.

Michael Novinson: Finally here, I want to get a sense, I know ChatGPT has been such a hot topic for the past five months. At a high level, what do you feel the impact of generative AI will be on the cyber industry?

Manny Rivelo: Like I gave you the example before, where we had engineers using it and creating malware with it. So there are two use cases I get into in conversations with customers. One is that ChatGPT is going to make it back into our enterprises, because it's going to drive productivity, meaning it's going to drive profit. Think about a simple use case where you could have your subscribers asking questions of a corporation, and your ChatGPT bot answering that question. There's a risk of exfiltration of data there, so how do you protect that? Data protection can help you. But there's also the concept of your organization using public AI and asking it questions, and that engine inferring from the information. So there's two ways that this is coming at us. I look at AI, or ChatGPT, or whatever it may be: it's going to make its way in.
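One way to treat the exfiltration risk described above is a data-protection gate in front of the public AI service: scan the outbound prompt and redact sensitive content before it leaves the organization. A minimal sketch with illustrative patterns only; a real DLP rule set is far larger and also uses classification labels, fingerprints, and exact-match lists:

```python
import re

# Illustrative sensitive-data patterns; not a complete DLP policy.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US Social Security number
    "aws_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),    # AWS-style access key ID
}

def gate_prompt(prompt: str) -> tuple:
    """Mask sensitive spans in an outbound prompt; return (clean_text, rules_hit)."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub("[REDACTED]", prompt)
    return prompt, hits
```

The same gate works in both directions Rivelo mentions: on prompts your employees send to a public engine, and on answers a customer-facing bot sends back out.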
We all stood here 15 years ago, and many enterprises said, we're not going to use the cloud; now we are using the cloud. We all said, even earlier, we're not going to use Wi-Fi technology, or we're not going to use this. We're all using these technologies. They drive too much improvement in what you need to do to drive growth for your organizations and productivity for your employees. But with that said, new mechanisms are going to have to come in to protect the enterprises from this technology. So it's a great boost from an innovation perspective, but it can also be malicious if not protected against correctly inside an organization.

Michael Novinson: It'll be a fascinating space to watch. Manny, thank you so much for the time.

Manny Rivelo: My pleasure. Thank you for everything.

Michael Novinson: Of course. We've been speaking with Manny Rivelo. He is the CEO at Forcepoint. For Information Security Media Group, this is Michael Novinson. Have a nice day.