WEBVTT 1 00:00:07.350 --> 00:00:09.990 Anna Delaney: Hello and welcome to this identity security 2 00:00:09.990 --> 00:00:13.170 special edition of the ISMG Editors' Panel. I'm Anna 3 00:00:13.170 --> 00:00:15.990 Delaney. And this week, our discussions range from taking a 4 00:00:15.990 --> 00:00:18.690 hard look at the U.S. government's approach to digital 5 00:00:18.690 --> 00:00:22.260 identity to the implications of generative AI on identity and 6 00:00:22.260 --> 00:00:25.770 access management. And joining us as our guide across the 7 00:00:25.770 --> 00:00:30.120 slippery worlds of cybersecurity, identity and AI is our very good 8 00:00:30.120 --> 00:00:33.870 friend Jeremy Grant, managing director, technology business 9 00:00:33.870 --> 00:00:38.130 strategy at Venable LLP. Jeremy, wonderful to have you back with 10 00:00:38.130 --> 00:00:38.490 us. 11 00:00:38.640 --> 00:00:40.050 Jeremy Grant: Great to be here. Thanks for having me. 12 00:00:40.830 --> 00:00:43.830 Anna Delaney: We're also joined by ISMG superstars Tom Field, 13 00:00:43.860 --> 00:00:47.010 senior vice president of editorial, and Mathew Schwartz, 14 00:00:47.190 --> 00:00:49.800 executive editor of DataBreachToday and Europe. 15 00:00:50.100 --> 00:00:52.200 Great to have this particular band back together. 16 00:00:52.560 --> 00:00:54.750 Tom Field: It's a good group. Great to be here. 17 00:00:55.260 --> 00:00:58.170 Anna Delaney: Well, Jeremy, we'd first love to ask you, where are 18 00:00:58.170 --> 00:00:59.940 you virtually in the world today? 19 00:01:00.260 --> 00:01:03.440 Jeremy Grant: I am virtually at the Vasa Museum in Stockholm, 20 00:01:03.440 --> 00:01:08.990 Sweden, which, if you can see behind me, is a place I got to 21 00:01:08.990 --> 00:01:13.460 visit about six weeks ago. It's a fascinating museum, I'd say 22 00:01:13.490 --> 00:01:17.450 unlike any other. It's basically a - I wouldn't even say a 23 00:01:17.450 --> 00:01:21.200 replica - it is an actual 17th century Swedish warship 24 00:01:21.620 --> 00:01:25.490 that sank about 20 minutes after it was launched, and then was 25 00:01:26.030 --> 00:01:30.440 dragged up from the bottom of the harbor in Stockholm about 50 26 00:01:30.470 --> 00:01:34.010 years ago and restored. And so it's actually a chance to see 27 00:01:34.010 --> 00:01:38.450 what a warship from that time would have looked like. Although 28 00:01:38.450 --> 00:01:41.870 I hesitate to say it was much of a warship. And that's because it 29 00:01:41.870 --> 00:01:45.260 sank after 20 minutes - it was essentially designed to be 30 00:01:45.740 --> 00:01:49.700 very narrow, very top heavy. And so the whole thing basically 31 00:01:49.700 --> 00:01:54.380 fell over once they had launched it. A good example of the need 32 00:01:54.380 --> 00:01:58.190 to perhaps beta test some things before you launch them to make 33 00:01:58.190 --> 00:02:01.160 sure they do the things that you actually want them to do. But 34 00:02:01.340 --> 00:02:02.360 that's where I am today. 35 00:02:02.000 --> 00:02:04.520 Anna Delaney: Very good. Well, I'm going to Stockholm in a 36 00:02:04.520 --> 00:02:07.490 couple of weeks. I'll have to pay a visit. Last time I was there, 37 00:02:07.490 --> 00:02:09.980 we were tempted by ABBA The Museum. 38 00:02:10.430 --> 00:02:13.310 Jeremy Grant: You know, my wife and I were not, like, the biggest 39 00:02:13.310 --> 00:02:16.220 ABBA fans.
So it was actually just around the corner from the 40 00:02:16.220 --> 00:02:20.690 Vasa Museum, but we opted for the big ship, as opposed to the 41 00:02:20.690 --> 00:02:21.500 disco music. 42 00:02:22.170 --> 00:02:24.450 Anna Delaney: Good. Well, Tom, where are you? 43 00:02:24.000 --> 00:02:27.030 Tom Field: Well, first of all, I am aware, Anna, that you do a 44 00:02:27.030 --> 00:02:30.240 version of Waterloo, so perhaps we'll hear that sometime during 45 00:02:30.240 --> 00:02:31.260 our session today. 46 00:02:31.860 --> 00:02:32.820 Anna Delaney: Maybe! 47 00:02:32.000 --> 00:02:36.320 Tom Field: As for where I am, it's not quite so exotic as where 48 00:02:36.320 --> 00:02:40.940 Jeremy is. I am in the Salem Witch Dungeon. I visited the 49 00:02:40.940 --> 00:02:43.340 town of Salem, Massachusetts, a couple of weeks ago with my 50 00:02:43.340 --> 00:02:46.220 family. Of course, that's where they had the Salem witch trials 51 00:02:46.220 --> 00:02:50.420 in 1692. And these are remnants of the actual witch dungeon where 52 00:02:50.420 --> 00:02:55.250 the suspects were kept while awaiting trial and final disposition. 53 00:02:56.200 --> 00:02:59.470 Anna Delaney: Very spooky, indeed. Matt, witch trials in 54 00:02:59.470 --> 00:02:59.980 Scotland? 55 00:03:00.290 --> 00:03:03.800 Mathew Schwartz: I feel a bit downmarket from discos, dungeons 56 00:03:03.800 --> 00:03:08.450 and big boats. This is the city centre of Dundee. Like many city 57 00:03:08.450 --> 00:03:12.050 centres these days, it's fallen on hard times. But it was 58 00:03:12.050 --> 00:03:15.530 interesting to me because very recently here, the phone booths 59 00:03:15.680 --> 00:03:20.390 were being taken out. And I just thought - I might have already 60 00:03:20.390 --> 00:03:23.510 expected them to be gone. But they've been there. And I think 61 00:03:23.510 --> 00:03:26.120 for a while - you'll know this better than me - they were 62 00:03:26.120 --> 00:03:29.360 advertising things like WiFi services, which I don't think 63 00:03:29.390 --> 00:03:33.230 anybody ever used. I think people got up to worse things in 64 00:03:33.230 --> 00:03:36.770 there. So interesting, I think, given the era. 65 00:03:37.370 --> 00:03:39.440 Anna Delaney: Yeah, very interesting. Actually, where 66 00:03:39.440 --> 00:03:43.670 I was in Sussex a couple of weeks back, all the old phone booths 67 00:03:43.670 --> 00:03:47.630 are now sort of book repositories. So you could just go in, take a 68 00:03:47.630 --> 00:03:52.250 book and add one to the collection. But they're the old 69 00:03:52.310 --> 00:03:55.400 red phone boxes, not the nice ones. 70 00:03:56.180 --> 00:03:57.860 Jeremy Grant: What would Superman do in the modern era? 71 00:03:59.610 --> 00:04:04.860 Anna Delaney: Good question. Well, just to show you my 72 00:04:04.860 --> 00:04:08.970 backdrop, this is again from Sussex, and it was a long ride to 73 00:04:08.970 --> 00:04:13.110 Rye Harbour Nature Reserve and a coastal walk. And it's home to 74 00:04:13.110 --> 00:04:16.530 over 4,000 species of plants and animals. So it's nice to get 75 00:04:16.530 --> 00:04:20.670 some sea air. Well, Jeremy, we have a few questions for you. So at 76 00:04:20.670 --> 00:04:23.670 this moment, I'm going to hand over to Tom to begin 77 00:04:23.670 --> 00:04:24.450 proceedings. 78 00:04:24.000 --> 00:04:29.130 Tom Field: Okay. So, the witness will please take the oath.
Jeremy, 79 00:04:29.280 --> 00:04:31.800 not long ago, you wrote an article for The Hill that was 80 00:04:31.800 --> 00:04:35.400 entitled "Why Is Our Government Taking a Backseat on Digital 81 00:04:35.400 --> 00:04:37.770 Identity Issues?" The piece mentions that the White House 82 00:04:37.770 --> 00:04:41.700 left digital identity out of its implementation plan for the 83 00:04:41.700 --> 00:04:47.520 National Cybersecurity Strategy. In your opinion, what impact could 84 00:04:47.550 --> 00:04:51.480 this omission have on the overall cybersecurity landscape? 85 00:04:51.480 --> 00:04:54.000 It seems like a significant missed opportunity. 86 00:04:54.750 --> 00:04:57.210 Jeremy Grant: That was our take as well. So one of the 87 00:04:57.210 --> 00:04:59.850 projects that I lead from my perch at Venable is running an 88 00:04:59.850 --> 00:05:02.160 organization called the Better Identity Coalition, which 89 00:05:02.160 --> 00:05:05.220 has brought together a lot of companies, largely the buyers. 90 00:05:05.430 --> 00:05:09.240 Think about firms in tech and telecom and financial services - 91 00:05:09.240 --> 00:05:11.910 both traditional banking and fintech - and health, that all need 92 00:05:11.910 --> 00:05:15.540 better identity systems. And I would say, when the National 93 00:05:15.540 --> 00:05:18.090 Cybersecurity Strategy came out in March, we were all 94 00:05:18.090 --> 00:05:21.240 collectively thrilled that it had a very robust section. It 95 00:05:21.240 --> 00:05:25.050 was strategic objective 4.5, about enhancing the digital 96 00:05:25.050 --> 00:05:29.610 identity ecosystem. So the take from the White House at the time 97 00:05:29.610 --> 00:05:32.220 was, "This is the strategy; what's really going to matter is 98 00:05:32.220 --> 00:05:34.380 the implementation plan." We were excited to see what would 99 00:05:34.380 --> 00:05:38.880 come out next. When it was released in July, they basically 100 00:05:38.880 --> 00:05:43.590 skipped from objective 4.4 to 4.6, as if digital identity had 101 00:05:43.590 --> 00:05:46.800 never been in the strategy. Now, in fairness to the White House, 102 00:05:46.800 --> 00:05:49.590 they stated, "This is an iterative document; there will 103 00:05:49.590 --> 00:05:54.900 be subsequent versions of it." And so, just because it was not 104 00:05:54.900 --> 00:05:57.060 included here does not mean that there won't be 105 00:05:57.060 --> 00:06:00.900 something in the future. But I will say, this was the only 106 00:06:00.900 --> 00:06:03.690 strategic objective in the entire strategy that got this 107 00:06:03.690 --> 00:06:06.060 treatment of being skipped over, with the exception of one on 108 00:06:06.060 --> 00:06:08.580 privacy legislation, where the strategy was very clear that the 109 00:06:08.580 --> 00:06:12.840 ball was in Congress's court. So in terms of what happened, it's 110 00:06:12.840 --> 00:06:19.050 a little hard to say. I think it is safe to say that maybe not 111 00:06:19.050 --> 00:06:25.230 all parts of the White House are equally enthusiastic, compared 112 00:06:25.230 --> 00:06:27.060 to some of the folks on the cybersecurity side, about 113 00:06:27.060 --> 00:06:29.580 looking to do something here. I do think that there are concerns 114 00:06:29.580 --> 00:06:32.730 about privacy and civil liberties - that if the government 115 00:06:32.730 --> 00:06:35.850 does something broader on digital identity, that might be 116 00:06:35.850 --> 00:06:39.270 a concern.
However, our point has been - and this was sort of 117 00:06:39.270 --> 00:06:42.270 the theme of the op-ed - choosing to do nothing is an active 118 00:06:42.270 --> 00:06:46.350 policy choice as well. And if there are concerns about privacy 119 00:06:46.350 --> 00:06:48.300 and civil liberties - and in fact, our coalition has 120 00:06:48.300 --> 00:06:51.810 articulated them a number of times - now, when some 121 00:06:51.810 --> 00:06:54.450 big things are happening in the digital identity space, is the time for the 122 00:06:54.450 --> 00:06:57.780 government to actually come in strongly, with the idea of 123 00:06:57.810 --> 00:07:01.080 outlining what good looks like, outlining where there are risks 124 00:07:01.080 --> 00:07:04.140 and taking some proactive steps to actually address them. Doing 125 00:07:04.140 --> 00:07:06.720 nothing might actually put us in a worse place in terms of the 126 00:07:06.720 --> 00:07:09.810 outcomes we're going to see in five or 10 years than doing 127 00:07:09.810 --> 00:07:10.230 something. 128 00:07:11.160 --> 00:07:12.960 Tom Field: Now, you do suggest in that piece that there is an 129 00:07:12.960 --> 00:07:15.210 opportunity still for the federal government to shape the 130 00:07:15.210 --> 00:07:18.930 direction of digital identity initiatives. How would you say 131 00:07:18.930 --> 00:07:21.900 the government can steer these efforts in the right direction, 132 00:07:21.900 --> 00:07:24.600 as opposed to, as you said, doing nothing and perhaps being 133 00:07:24.600 --> 00:07:25.950 set back even further? 134 00:07:25.000 --> 00:07:27.475 Jeremy Grant: Well, I think a lot of it comes back to what we 135 00:07:27.527 --> 00:07:30.634 think we should see in terms of follow-up from the National 136 00:07:30.687 --> 00:07:33.899 Cybersecurity Strategy, where again, we thought the language 137 00:07:33.952 --> 00:07:36.953 in there was quite good, and just the fact that the White 138 00:07:37.006 --> 00:07:40.271 House would be leading on this. I think we've talked before about a 139 00:07:40.324 --> 00:07:43.483 big challenge in the U.S. when it comes to digital identity. 140 00:07:43.536 --> 00:07:46.695 And what we're really talking about here is, I would say, 141 00:07:46.748 --> 00:07:49.539 challenges around remote identity proofing. How do we 142 00:07:49.592 --> 00:07:52.909 know who's online when they're applying for a new account at a 143 00:07:52.962 --> 00:07:56.122 bank, at a government agency, or for something in the healthcare 144 00:07:56.174 --> 00:07:59.070 space? We have a big challenge in that we don't have a 145 00:07:59.123 --> 00:08:02.230 national ID in the U.S. And we're not calling for that. But 146 00:08:02.283 --> 00:08:04.600 we do have a number of nationally recognized 147 00:08:04.652 --> 00:08:07.601 authoritative identity systems that are all stuck in the 148 00:08:07.654 --> 00:08:10.708 physical world, be it the birth certificate I got from the 149 00:08:10.761 --> 00:08:14.026 county in Michigan where I was born, the driver's license that 151 00:08:14.079 --> 00:08:17.080 my state DMV gives me, the Social Security number and the 152 00:08:17.133 --> 00:08:19.924 passport that the federal government gives me - all of 153 00:08:19.976 --> 00:08:22.978 those I can bring into a physical building, and use that 154 00:08:23.031 --> 00:08:26.401 to prove who I am.
But there are no counterparts to those in the 155 00:08:26.454 --> 00:08:29.666 digital world. And so what we have been advocating for is for 156 00:08:29.719 --> 00:08:32.773 the White House to lead an effort to bring together what I 157 00:08:32.825 --> 00:08:36.143 would call the big stakeholders at the federal, state and local 158 00:08:36.196 --> 00:08:39.303 level, who are issuing these authoritative credentials, all 159 00:08:39.355 --> 00:08:42.673 of whom have different efforts - I would say different levels of 160 00:08:42.726 --> 00:08:45.306 maturity - underway to try and come up with digital 161 00:08:45.359 --> 00:08:48.465 counterparts. And again, take that time to define what good 162 00:08:48.518 --> 00:08:51.151 looks like, what are the outcomes we're looking to 163 00:08:51.204 --> 00:08:54.258 achieve, what are the risks that we want to anticipate and 164 00:08:54.311 --> 00:08:57.576 mitigate, and how do we have a plan to get from A to B? That's, 165 00:08:57.628 --> 00:09:00.735 I think, where there's some important work to be done. And I'll 166 00:09:00.788 --> 00:09:03.421 flag, on that point: just a couple of days ago, the 167 00:09:03.474 --> 00:09:06.159 Transportation Security Administration released 144 168 00:09:06.212 --> 00:09:09.319 pages of draft regulations around what digital counterparts 169 00:09:09.371 --> 00:09:12.478 to plastic drivers' licenses will look like for purposes of 170 00:09:12.531 --> 00:09:15.638 compliance with the REAL ID Act of 2005. Again, TSA is very 171 00:09:15.691 --> 00:09:19.008 focused on the things that you'll have to do to accept 172 00:09:19.061 --> 00:09:21.957 a digital credential at, say, the TSA checkpoint. That's an 173 00:09:22.010 --> 00:09:25.327 interesting use case, but it's kind of a nice-to-have. In the online 174 00:09:25.380 --> 00:09:28.645 identity proofing world, we have basically the equivalent of a 175 00:09:28.698 --> 00:09:31.963 raging wildfire, where there are millions of victims and tens of 176 00:09:32.015 --> 00:09:35.122 billions of dollars of losses each year because of identity 177 00:09:35.175 --> 00:09:38.071 theft and identity-related cybercrime. How do we have a 178 00:09:38.124 --> 00:09:41.231 broader national effort to go beyond just this narrow place 179 00:09:41.283 --> 00:09:44.338 where the TSA is focused and actually look at solving this 180 00:09:44.390 --> 00:09:45.760 problem more holistically? 181 00:09:48.610 --> 00:09:53.680 Tom Field: Very well said. I'm going to turn this over to my colleague, Matt. Dandy, your witness from the phone booth. 182 00:09:52.650 --> 00:09:55.262 Mathew Schwartz: Jeremy, you co-authored a great post 183 00:09:55.335 --> 00:09:58.892 recently about what generative AI means for digital identity. And 184 00:09:58.964 --> 00:10:03.392 especially for me, I thought it was great that you started with this classic 185 00:10:03.464 --> 00:10:08.037 line, "Help me, Obi-Wan Kenobi, you're my only hope." But as you 186 00:10:08.109 --> 00:10:12.537 mentioned in your post, what if this wasn't actually Princess 187 00:10:12.610 --> 00:10:16.892 Leia? What if this was a fake Princess Leia - a deepfake - reaching out to 188 00:10:16.964 --> 00:10:21.392 Obi-Wan, and Obi-Wan was being trolled by the Empire, and 189 00:10:21.464 --> 00:10:26.110 everybody involved was duped? I think this is a great example of 190 00:10:26.182 --> 00:10:30.465 some of the risks.
I mean, Hollywood, yeah, but some of the 191 00:10:30.537 --> 00:10:34.747 risks that AI is bringing to digital identity and security, 192 00:10:34.820 --> 00:10:39.102 and our proclivity, perhaps, to overlook those in moments of 193 00:10:39.174 --> 00:10:43.312 distress or adventure. Where do you think some of the big 194 00:10:43.384 --> 00:10:47.158 problems are right now? And we talked about a lot of 195 00:10:47.231 --> 00:10:51.659 possibilities and potential, but what do you see as near-term 196 00:10:51.731 --> 00:10:52.530 risks here? 197 00:10:53.200 --> 00:10:55.600 Jeremy Grant: Well, I'll say first, just from the perspective 198 00:10:55.600 --> 00:10:59.290 of historical accuracy, keep in mind that Star Wars was a long 199 00:10:59.320 --> 00:11:03.100 time ago in a galaxy far, far away. So perhaps they had not 200 00:11:03.100 --> 00:11:07.240 created generative AI, even if they had managed to sort out 201 00:11:08.500 --> 00:11:11.680 space travel and some other things in a pretty cool fashion. 202 00:11:12.160 --> 00:11:15.190 But on the risks that we're seeing these days, getting 203 00:11:15.190 --> 00:11:18.190 back to deepfakes: we've been talking for years in the 204 00:11:18.190 --> 00:11:21.910 cybersecurity space about this concept of zero trust, and it's 205 00:11:22.390 --> 00:11:25.870 a zippy term, with a bit of an arcane meaning in the 206 00:11:25.870 --> 00:11:28.330 enterprise: let's just assume that people aren't trusted, and 207 00:11:28.330 --> 00:11:31.300 we want to verify who they are and their permissions each time 208 00:11:31.300 --> 00:11:34.330 before we let them access something. But I think zero 209 00:11:34.330 --> 00:11:37.630 trust is about to take on a bit of a darker meaning, which is, 210 00:11:37.870 --> 00:11:41.770 in a world where generative AI is advancing so quickly, we're 211 00:11:41.770 --> 00:11:45.070 soon not going to be able to trust any voice or photo or 212 00:11:45.070 --> 00:11:47.620 video that we see online. In fact, we're already seeing the 213 00:11:47.620 --> 00:11:50.140 technology being used to attack some remote identity 214 00:11:50.140 --> 00:11:55.870 verification systems these days. And at that point, the idea of, 215 00:11:57.010 --> 00:11:59.170 I guess, what I would describe as proof of humanity - that this is 216 00:11:59.170 --> 00:12:02.350 really a person at the other end of this and not an AI-driven bot 217 00:12:02.350 --> 00:12:06.010 that's just looking to scam somebody - is going to become much 218 00:12:06.010 --> 00:12:11.920 more important. And so I think this is really going to be a 219 00:12:11.920 --> 00:12:15.190 significant challenge going forward, not just from a 220 00:12:15.190 --> 00:12:18.040 security perspective, but this broader question of, "What can 221 00:12:18.040 --> 00:12:20.290 we all trust as we're encountering different things 222 00:12:20.290 --> 00:12:20.800 online?" 223 00:12:22.970 --> 00:12:24.980 Mathew Schwartz: Trusted identity systems - not to feed 224 00:12:24.980 --> 00:12:28.250 you, like, a softball here, but what can be meaningfully done, 225 00:12:28.250 --> 00:12:32.030 given the fact that we're all human.
And as I was saying, in 226 00:12:32.030 --> 00:12:35.150 times when you're approached by droids and told that you've got 227 00:12:35.150 --> 00:12:39.050 to save the space princess, and we say, "Show me the way" - 228 00:12:39.050 --> 00:12:42.590 social engineering remains a huge threat. You're getting under 229 00:12:42.590 --> 00:12:47.240 these protections that we would normally have for ourselves, and 230 00:12:47.270 --> 00:12:51.980 AI-enhanced trickery gives fraudsters more possibilities. 231 00:12:52.280 --> 00:12:57.080 Are there any anti-fraud controls that you see, maybe 232 00:12:57.080 --> 00:13:01.550 hopefully in the near term, that could meaningfully intercept 233 00:13:01.760 --> 00:13:05.150 these sorts of social engineering attacks? I think we 234 00:13:05.150 --> 00:13:07.280 have to accept there's always going to be some level of those. 235 00:13:07.520 --> 00:13:09.590 But what can be done, do you think? 236 00:13:09.570 --> 00:13:11.503 Jeremy Grant: Well, I think there are two things we 237 00:13:11.558 --> 00:13:14.982 highlighted in the blog. So the first is that AI is getting good at 238 00:13:15.038 --> 00:13:18.462 spoofing a lot of things, and might even be able to spoof some 239 00:13:18.517 --> 00:13:21.776 biometric systems. But the one thing that AI is not able to 240 00:13:21.831 --> 00:13:25.090 spoof is asymmetric public key cryptography. The idea of an 241 00:13:25.145 --> 00:13:28.404 identity that is bound to a private key, stored securely in 242 00:13:28.459 --> 00:13:31.994 hardware - be it in your device or a standalone token - is something 243 00:13:32.049 --> 00:13:35.253 that I think is going to become much more important going 244 00:13:35.308 --> 00:13:38.732 forward. Because it will be the one thing that - look, so much of 245 00:13:38.788 --> 00:13:41.825 biometrics and just about everything else we're looking at is 246 00:13:41.881 --> 00:13:44.863 predictive, in terms of we're analyzing a bunch of data 247 00:13:44.919 --> 00:13:48.177 points and it seems like it's you, or it seems like it's a real 248 00:13:48.232 --> 00:13:51.491 person on the other end - but possession of a private key is 249 00:13:51.546 --> 00:13:55.081 actually a determinative factor. And so I think that will become 250 00:13:55.137 --> 00:13:58.561 more important going forward, because AI cannot spoof that, at least 251 00:13:58.616 --> 00:14:02.096 until we marry AI with quantum computing in about 10 years, and 252 00:14:02.151 --> 00:14:05.631 then we'll all be bowing down to the machines - that might be a 253 00:14:05.686 --> 00:14:08.890 good time for me to retire. The second thing we flagged is 254 00:14:08.945 --> 00:14:11.762 actually AI itself, which is that a lot of the same 255 00:14:11.817 --> 00:14:15.297 technology that can be used to launch these attacks can also be 256 00:14:15.352 --> 00:14:18.666 used to detect them. And in fact, a lot of what we're seeing 257 00:14:18.721 --> 00:14:21.980 in security these days is a model where we're increasingly 258 00:14:22.035 --> 00:14:25.349 reliant on being able to ingest data from a lot of different 259 00:14:25.404 --> 00:14:28.497 sources - about the machine, about the transaction, 260 00:14:28.552 --> 00:14:31.922 about things that we're seeing - and be able to analyze it for 261 00:14:31.977 --> 00:14:35.180 potential anomalies that show that there's actually a risk 262 00:14:35.236 --> 00:14:38.494 that something fishy is going on.
And so I will say I'm not 263 00:14:38.550 --> 00:14:41.974 totally pessimistic about AI, but I do think it's a case where 264 00:14:42.029 --> 00:14:44.460 we'll actually need to fight fire with fire. 265 00:14:45.750 --> 00:14:48.090 Mathew Schwartz: Fantastic. Well, fighting fire with fire. 266 00:14:48.090 --> 00:14:49.320 Anna, over to you. 267 00:14:49.660 --> 00:14:53.470 Anna Delaney: Very good. Well, Jeremy, the U.S. Cyber Safety 268 00:14:53.470 --> 00:14:56.290 Review Board recently released a report into Lapsus$, the 269 00:14:56.290 --> 00:14:59.650 now-defunct adolescent hacking group that amassed 270 00:14:59.710 --> 00:15:03.400 multibillion-dollar and multinational victims, such as 271 00:15:03.430 --> 00:15:07.600 NVIDIA, Uber and Rockstar Games. The data theft and extortion 272 00:15:07.600 --> 00:15:11.770 gang, the review board said, used primarily simple techniques like 273 00:15:11.770 --> 00:15:15.040 stealing cell phone numbers and phishing employees to gain 274 00:15:15.040 --> 00:15:18.550 access to companies and their proprietary data. So, Jeremy, 275 00:15:18.550 --> 00:15:21.730 what are the top digital identity takeaways you'd 276 00:15:21.730 --> 00:15:22.870 highlight from this report? 277 00:15:22.000 --> 00:15:22.870 Jeremy Grant: So I think there were two, and I happen to have 278 00:15:23.440 --> 00:15:27.190 the report here, because I was going through it 279 00:15:27.190 --> 00:15:29.530 again earlier today, reviewing some of the findings with 280 00:15:29.530 --> 00:15:34.840 clients. One, it is a great report. I think the number one 281 00:15:34.840 --> 00:15:37.930 takeaway, the number one recommendation, was that it needs 282 00:15:37.930 --> 00:15:41.470 to be a national priority here in the U.S. to take significant 283 00:15:41.470 --> 00:15:44.200 steps to accelerate the adoption of phishing-resistant 284 00:15:44.230 --> 00:15:46.390 authentication, ideally passwordless, and they 285 00:15:46.390 --> 00:15:51.940 specifically pointed to FIDO standards and FIDO passkeys as 286 00:15:51.940 --> 00:15:58.810 a core part of the solution. A big vulnerability that Lapsus$ - 287 00:15:58.810 --> 00:16:02.080 who, as you pointed out, were just a bunch of sharp teenagers - 288 00:16:02.290 --> 00:16:05.710 were able to exploit is that a lot of the legacy multifactor 289 00:16:05.710 --> 00:16:08.890 authentication that we've rolled out over the last 10 years - be 290 00:16:08.890 --> 00:16:12.760 they SMS codes, be they one-time password apps, be they the push 291 00:16:12.760 --> 00:16:14.800 apps where you get a push notification and it says, 292 00:16:14.830 --> 00:16:17.680 "Anna, you're trying to log in?" - yes, all of those are phishable. 293 00:16:17.680 --> 00:16:20.830 Now - and we see this all the time - we come up with a 294 00:16:20.830 --> 00:16:23.560 security innovation and help stop attacks for a few years, 295 00:16:23.560 --> 00:16:28.330 and then the attackers innovate and they catch up. And we are 296 00:16:28.330 --> 00:16:32.020 now seeing - it's not exactly new; in fact, Google back in 297 00:16:32.020 --> 00:16:35.800 2015 flagged that, hey, if an attacker can 298 00:16:35.800 --> 00:16:38.770 phish a password from you, they can also phish that one-time 299 00:16:38.770 --> 00:16:43.570 passcode that you have. And so we need to start thinking about 300 00:16:43.570 --> 00:16:46.150 things that are more secure.
But I think what we've seen since 301 00:16:46.150 --> 00:16:49.510 2015 is that what was then a new, novel attack - now you're seeing a 302 00:16:49.510 --> 00:16:54.070 bunch of 17-year-olds able to do some serious damage with it. 303 00:16:54.250 --> 00:16:57.610 It really has to be a priority to shift to truly 304 00:16:57.640 --> 00:16:59.800 phishing-resistant authentication. And back to the 305 00:16:59.800 --> 00:17:02.410 point I was making about AI before, FIDO is based on 306 00:17:02.410 --> 00:17:05.170 possession of a private key; it is something that can't be 307 00:17:05.170 --> 00:17:09.520 phished. And so I think that's much more important. The second 308 00:17:09.520 --> 00:17:13.240 thing they also talked about in there is SIM swap attacks, and 309 00:17:13.240 --> 00:17:17.530 the challenges that we continue to see with the mobile carriers 310 00:17:17.830 --> 00:17:22.060 around guarding against SIM swap attacks, where, essentially, a 311 00:17:22.060 --> 00:17:25.300 scammer will go in and either convince the phone company or, 312 00:17:25.300 --> 00:17:28.720 perhaps in some cases, bribe somebody who works in a mobile 313 00:17:28.720 --> 00:17:34.060 store to essentially transfer your phone number from one phone to 314 00:17:34.060 --> 00:17:37.030 another. And then if you're getting those SMS codes, well, 315 00:17:37.060 --> 00:17:40.090 now they have control of the phone that is used to get them. 316 00:17:40.540 --> 00:17:44.830 And they pointed out, when we're doing SIM 317 00:17:44.830 --> 00:17:47.950 transfers - which of course we want to enable for consumers; 318 00:17:48.550 --> 00:17:50.620 they might just be upgrading their phone on the same 319 00:17:50.620 --> 00:17:53.140 carrier, they might be looking to port from one carrier to 320 00:17:53.140 --> 00:17:56.560 another - there are a lot of opportunities there for people 321 00:17:56.560 --> 00:17:59.710 to spoof identities, and so they had some recommendations around 322 00:17:59.920 --> 00:18:03.610 strong identity verification and authentication that they think 323 00:18:03.610 --> 00:18:05.770 the carriers should look to implement. So I'd say, 324 00:18:05.770 --> 00:18:10.240 collectively - like a lot of things in identity these days - 325 00:18:10.240 --> 00:18:14.470 it was a very identity-centric report, both in terms of 326 00:18:14.470 --> 00:18:16.600 diagnosing how the attacks happened and also in the 327 00:18:16.600 --> 00:18:18.280 recommendations for how to fix things. 328 00:18:19.560 --> 00:18:22.020 Anna Delaney: Does it feel, Jeremy, like you're having to 329 00:18:22.020 --> 00:18:25.140 make the same defensive recommendations time and time 330 00:18:25.140 --> 00:18:28.710 again? CISA Director Jen Easterly used the report to call 331 00:18:28.710 --> 00:18:32.550 for more widespread use of MFA. But that's not a new message. 332 00:18:32.970 --> 00:18:34.410 Are we making progress here? 333 00:18:36.460 --> 00:18:38.650 Jeremy Grant: We're making progress. But I will say it is a 334 00:18:38.650 --> 00:18:44.560 little tiresome at times, saying the same thing year after year. 335 00:18:45.580 --> 00:18:49.990 Although I feel like when I was saying this 10 years ago, I 336 00:18:49.990 --> 00:18:52.870 might have been the only one in the room saying it, and a lot 337 00:18:52.870 --> 00:18:55.210 of people were asking, "Why is it that we really need this 338 00:18:55.210 --> 00:18:59.350 stuff?"
Whereas now, as you pointed out, the director of 339 00:18:59.440 --> 00:19:04.450 CISA, Jen Easterly, is personally leading the campaign to try to 340 00:19:04.450 --> 00:19:09.460 get the private sector to implement this technology. So I 341 00:19:09.460 --> 00:19:11.950 think we're definitely making progress. I think key leaders 342 00:19:11.950 --> 00:19:15.010 and decision makers get it right now. I think we're actually 343 00:19:15.010 --> 00:19:17.440 making really good progress on the authentication side in 344 00:19:17.440 --> 00:19:20.980 general, I think, with the increasing ubiquity of things 345 00:19:20.980 --> 00:19:24.580 like FIDO authentication. We know how to fix this; we sort of 346 00:19:24.580 --> 00:19:27.730 know how this ends. I still think adoption is going to take 347 00:19:27.730 --> 00:19:30.130 years, just because there's always a gap between when you 348 00:19:30.130 --> 00:19:32.590 have an innovation and when we can actually get it into 349 00:19:32.590 --> 00:19:36.340 consumers' and businesses' hands. And there is a challenge in 350 00:19:36.340 --> 00:19:40.420 that. How would I say this? Your average consumer is used to 351 00:19:40.420 --> 00:19:43.240 having a password. And if you actually let them go without a 352 00:19:43.240 --> 00:19:45.340 password, they might think you're making them less secure, 353 00:19:45.340 --> 00:19:47.320 even if you're really making their life simpler and making 354 00:19:47.320 --> 00:19:50.080 them more secure. And so I think there are some usability 355 00:19:50.080 --> 00:19:53.530 challenges that still have to be overcome. On the identity proofing 356 00:19:53.530 --> 00:19:56.050 side, I don't feel like we're there yet - I mean, the fact that 357 00:19:56.050 --> 00:19:59.830 we don't have a comprehensive national approach there the way 358 00:19:59.830 --> 00:20:03.280 we do in the authentication space, the fact that we had this 359 00:20:03.280 --> 00:20:06.070 episode earlier this month where it was left out of the 360 00:20:06.070 --> 00:20:09.010 implementation plan for the National Cybersecurity Strategy. 361 00:20:09.250 --> 00:20:11.980 I think that's a new frontier where more work is going to be 362 00:20:11.980 --> 00:20:17.650 needed, and where there is a gap right now between where security 363 00:20:17.650 --> 00:20:20.260 experts - and, I'd say more broadly, a lot of 364 00:20:20.260 --> 00:20:23.560 industries - are in terms of the solutions that are needed, and 365 00:20:23.590 --> 00:20:26.710 seeing government leadership willing to step up and help 366 00:20:26.710 --> 00:20:28.240 accelerate the process to get us there. 367 00:20:29.800 --> 00:20:31.540 Anna Delaney: All right, good. Well, thank you, Jeremy. One 368 00:20:31.540 --> 00:20:35.440 final question for you, if I may, just for fun, carrying on 369 00:20:35.440 --> 00:20:38.950 with the Star Wars goodness: I'd like you to either share 370 00:20:38.950 --> 00:20:42.580 something from Star Wars that is particularly great or awful when 371 00:20:42.580 --> 00:20:46.480 it comes to illustrating security concepts, or devise a 372 00:20:46.480 --> 00:20:49.540 new Star Wars character which does the same. Jeremy, I'll give 373 00:20:49.540 --> 00:20:54.310 you a break for a moment; we have these experts in the room. Tom, go for 374 00:20:54.310 --> 00:20:54.490 it. 375 00:20:55.110 --> 00:20:58.020 Tom Field: Last night, I happened to speak to a CISO at a 376 00:20:58.020 --> 00:21:01.110 university medical center.
And she talked about having just 377 00:21:01.110 --> 00:21:05.220 deployed a new solution that helps detect the use of 378 00:21:05.220 --> 00:21:08.820 generative AI in content. I thought what a terrific solution 379 00:21:08.820 --> 00:21:12.360 that is, particularly for a university. So my Star Wars 380 00:21:12.360 --> 00:21:16.500 character is a droid that goes out seeking the use of 381 00:21:16.500 --> 00:21:19.950 generative AI. I'm going to call it GPT3, though. 382 00:21:21.930 --> 00:21:23.730 Mathew Schwartz: Is it a killer droid? Is it a killer droid, 383 00:21:23.730 --> 00:21:24.990 Tom? Is it all black? 384 00:21:25.110 --> 00:21:26.880 Tom Field: Depends upon the content that gets found. 385 00:21:29.190 --> 00:21:30.930 Anna Delaney: Very good. Love that. 386 00:21:30.000 --> 00:21:34.830 Mathew Schwartz: I love the lack of compartmentalization by the 387 00:21:34.830 --> 00:21:39.330 rebel force that picks some frozen planet somewhere and 388 00:21:39.330 --> 00:21:42.690 just says, "Hey, probably nobody will find us. But if they do, 389 00:21:42.720 --> 00:21:45.570 oops, a lot of people are going to die." But I especially love 390 00:21:45.570 --> 00:21:49.890 the ability to steal a piece of equipment - say, an Imperial 391 00:21:49.890 --> 00:21:54.270 transport shuttle - and come up with the code, even if "the shuttle 392 00:21:54.270 --> 00:21:58.080 looks a little old" and all your codes are outdated. Does the Empire 393 00:21:58.080 --> 00:22:03.480 use step-up authentication? Does it seek a second or third form 394 00:22:03.480 --> 00:22:08.280 of verification? Does it say, "Uh huh? Don't think your engines 395 00:22:08.280 --> 00:22:11.100 are too dry? We're going to have a closer look here"? They do not. 396 00:22:11.340 --> 00:22:14.100 So that's one of the little things that's jumped out at me from 397 00:22:14.100 --> 00:22:14.700 time to time. 398 00:22:15.420 --> 00:22:17.490 Tom Field: There is no try - authenticate. 399 00:22:20.000 --> 00:22:24.920 Anna Delaney: Well, my character is a Jedi Sentinel, whose mission 400 00:22:24.920 --> 00:22:28.520 from an early age has been to safeguard online identities, and 401 00:22:28.520 --> 00:22:33.680 part of her superpower is her data-shielding aura. So she emits an 402 00:22:33.680 --> 00:22:36.890 electromagnetic shield that envelops users within her 403 00:22:36.890 --> 00:22:40.160 vicinity, safeguarding their personal data and sensitive 404 00:22:40.160 --> 00:22:44.540 information from breaches. So I wonder if George Lucas has 405 00:22:44.540 --> 00:22:46.430 convinced Jeremy. 406 00:22:46.000 --> 00:22:50.110 Jeremy Grant: I was going to go with a semi-competent CISO for 407 00:22:50.110 --> 00:22:55.450 the Empire. I mean, the number of times that R2-D2 - while a 408 00:22:55.450 --> 00:23:01.990 lovable droid, I think just a pretty standard-issue droid in 409 00:23:01.990 --> 00:23:08.020 that world - can just walk up, or I guess wheel up, to some sort of 410 00:23:08.020 --> 00:23:12.520 Empire system, stick a little probe in and instantly access it 411 00:23:12.520 --> 00:23:17.140 and basically take the system over. This is an empire that, for 412 00:23:17.140 --> 00:23:20.140 whatever they were able to do in terms of sheer power and fear 413 00:23:20.140 --> 00:23:24.100 and intimidation, consistently had some of the crappiest access 414 00:23:24.100 --> 00:23:27.460 control and cybersecurity practices that you could ever 415 00:23:27.460 --> 00:23:31.690 imagine.
And it would have been a much quicker movie, or set of 416 00:23:31.690 --> 00:23:34.240 movies, I think, if they'd just had a competent system on the back 417 00:23:34.240 --> 00:23:38.470 end. Which drives home the point, which is that CEOs and boards 418 00:23:38.860 --> 00:23:40.960 keep ignoring information security, and they don't 419 00:23:40.960 --> 00:23:44.050 actually empower people where it counts. So they got what they 420 00:23:44.050 --> 00:23:44.470 deserved. 421 00:23:46.160 --> 00:23:48.050 Anna Delaney: Well said. Very true, and I'm looking forward to 422 00:23:48.050 --> 00:23:54.170 the next film. Jeremy, thank you so much. This has been immensely 423 00:23:54.170 --> 00:23:58.310 fun and very informative, as always, so we thank you. 424 00:23:58.760 --> 00:23:59.540 Jeremy Grant: Thank you for having me. 425 00:23:59.750 --> 00:24:00.590 Tom Field: Thanks, Jeremy. 426 00:24:02.180 --> 00:24:04.160 Anna Delaney: And thanks so much for watching. Until next time.