Anna Delaney: Hello, I'm Anna Delaney, and welcome to the ISMG Editors' Panel - a weekly spot where ISMG journalists examine and discuss the latest cybersecurity news, trends and events. I'm joined today by my brilliant colleagues: Tony Morbin, executive news editor for the EU; Mathew Schwartz, executive editor of DataBreachToday and Europe; and master and commander, Tom Field, senior vice president of editorial. Good to see you all.

Tony Morbin: Good to see you, Anna.

Tom Field: Thanks for having me.

Mathew Schwartz: Glad to be here.

Anna Delaney: Tony, are you in the void? What's happening there?

Tony Morbin: This is HAL, from 2001: A Space Odyssey - one of the unintended consequences of AI. So I'm looking at that.

Anna Delaney: Looking forward to that. Tom, what a sky - again?

Tom Field: Yeah, I'm about to join HAL up in space. This is a beautiful sunset at my regional airport, where I fly in and out. And I would have to say that my aeroplane is not quite the same as the shuttle that reached the spaceship in 2001. But it aspires.

Anna Delaney: And Mat, do we have the urban version of HAL?

Mathew Schwartz: Old-school transport here: the steam train in Leith, which is in the north part of Edinburgh, Scotland's capital city, where I've been spending some time recently. And I just love this street art.

Anna Delaney: Love it - from train to tank. There we go. As you know, I'm in Stockholm today to host a roundtable this evening, and I took this pic of the Army Museum, which is not too far from my hotel. And apparently, as early as the 17th century, this building was used to store artillery pieces and weaponry. And I just love these railings - just check them out. Only the Vikings.

Tom Field: Are those brass?

Anna Delaney: Almost, yeah. Well, Tom, you conducted a very interesting interview recently on keeping assets secure in the quantum era, and you were introduced to a new term, were you not?
Tom Field: Indeed - the quantum divide. And I got this from sitting down having a conversation at Black Hat recently with Jen Sovada. She's a retired U.S. Air Force colonel and currently the president of global public sector at the company SandboxAQ. And so we talked about the digital divide. And you know, I asked her: okay, not the digital divide but the quantum divide - because we've heard of the digital divide; we talk about that a lot. But in this era of quantum computing, what does a quantum divide mean, and why should we be concerned about it? So I'll talk a bit about that. But first, let me just show you a clip of my discussion with her, where she defines this for us and gives us a little bit of context.

Jen Sovada: So interestingly enough, we've had this thing called the digital divide for a very long time. And the quantum divide is a similar topic. It really identifies the 18 countries that have quantum programs, quantum technology, quantum ecosystems and strategies. And the rest of the world right now does not have either an ecosystem or a plan to get money invested. And so it's creating haves and have-nots within the quantum world. And the way that we see it manifesting is in things like making sure that we're protected against quantum attack; making sure that those that need new emerging technology, like quantum technology, have access to it - whether it's in the cloud, or whether it's a quantum computer when quantum computers are available; or even technology like quantum navigation for safety of flight, or for medical devices, so that we can continue to have better welfare around the world.

Tom Field: Okay, I'll give you that as a teaser then - the quantum divide. Now, she talks about 18 countries, but what we really need to talk about are Russia and China. They're among the countries that are spending an estimated 36 billion on quantum information science this year alone. And their focus is on breaking traditional encryption and safeguarding their own data. So you see many hacks of encrypted files today being done with store-now, decrypt-later strategies: we'll have this information, and when we can get into it, we will. You've got to be concerned about that. And that's one of the reasons you need to prepare now and to bridge this gap.
Anna Delaney: Well, exactly. But the one question that's asked time and time again is: why should we be focused on this now? There are so many other things to be focused on, and this is potentially a future threat. So how do we as journalists go about answering that? What's the immediate risk?

Tom Field: Quantum has been a future threat forever, but I think we are getting closer. And as people start saying quantum is 10 years out, it's five years out, that time period is getting shorter. And - I'm not trying to be Mr., you know, Fear, Uncertainty and Doubt here - but the thing I would point out is that Russia is very much partnered with China on quantum right now. We recognize that China is well ahead of what's happening in the U.S. and U.K. and EU, and we just aren't developing at the same speed. And Russia's Vladimir Putin was quoted recently as saying we need to have a quantum Manhattan Project, very similar to what happened in the 1940s with the atomic bomb. To me, that's recent enough to be concerned.

Mathew Schwartz: Huge geopolitical potential here for things to go wrong. On the flip side, I've also heard some experts advising that on an intellectual property front, for example - if quantum breakage, the ability to wreck encryption using quantum computers, does happen within a certain timeframe - could it affect you? For some organizations, in two or three years whatever secrets they're storing may no longer be secret, or may no longer be valuable, so they don't need to worry about it so much. When it comes to government matters, though - spycraft, that sort of thing - that starts to get really worrying.

Tom Field: It does. It's time to be quantum-proof, as one would say. To me - I don't want to use the term wake-up call, because that should have happened a long time ago - but it's something to be aware of. I think it's something that organizations need to be spending some more time thinking about, because what's protecting us today is not going to protect us tomorrow, no matter when that tomorrow happens to arrive.

Tony Morbin: Yeah, because some secrets, you know, you just can't change - where your secret bases are. They're going to be the same in the future as they are now. Or the recipe for Coca-Cola.
Anna Delaney: Yeah. Well, my roundtable this evening is about "Post-Quantum Cryptography: Are You Ready?" So thank you for that background. It's proven useful, and I'll report back with nuggets next week. Well, Mat, it's not often we discuss usernames, because we're too busy talking about weak passwords. But we're bucking the trend this week, as you asked: when do common usernames pose a threat? So have you got the answer to that question?

Mathew Schwartz: I've got a great answer to that question. And I don't want to seem boring here by talking about usernames - something about passwords seems a lot sexier. And we know from repeat dumps of people's passwords that, as humans, we're predisposed to do the easy thing. And so a lot of people's passwords are horrible. If it's not their pet's name, it's 12345. Or it is literally "password," or "my password," or my favorite in IT environments, admin/admin - as in username admin, password admin - or oftentimes just a blank field for the password. So there's a lot to be concerned about there. But there was some really interesting research that got published recently by Jesse La Grew, the CISO of Madison College in Wisconsin. He works with the SANS Institute's Internet Storm Center, and they work with a lot of honeypots and see what's going on. And Jesse published a blog post basically rounding up 16 months of honeypot results. And he was seeing a lot of attempts to brute-force SSH. And you know this because the username that was being attempted is root, which is basically what you go after if you're trying to get remote access to a Linux system via SSH. And so he published this table of all the different information. About half the time, the honeypot was seeing attack attempts using root; after that was admin, which is the default for Windows remote access - that was in about 4% of cases. After that you were seeing things like user, test, ubuntu, oracle and another favorite of mine, ftpuser, which should just strike horror into everybody's hearts, because there could still be live FTP systems online. A few years back, there were, I think, 21 million FTP systems still connected to the internet. And it might also be someone else's username for a different system - you just don't know. So this isn't rocket science.
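(Editor's note: for readers who want to reproduce this kind of count on their own systems, here is a minimal sketch of a failed-login username tally in Python. It is illustrative only: the log path and regular expressions assume a stock OpenSSH server writing to a syslog-style auth log, and will need adjusting for other environments.)

```python
#!/usr/bin/env python3
"""Tally the usernames attempted in failed SSH logins.

Minimal sketch only: the log location and message formats below are
assumptions for a typical OpenSSH server logging to /var/log/auth.log.
"""
import re
from collections import Counter

AUTH_LOG = "/var/log/auth.log"  # assumed Debian/Ubuntu-style location

# Match "invalid user <name>" first, then direct attempts against real accounts.
PATTERNS = [
    re.compile(r"Failed password for invalid user (\S+)"),
    re.compile(r"Failed password for (\S+)"),
]

def tally_usernames(path: str) -> Counter:
    """Count each username seen in failed-login lines of the given log."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for pattern in PATTERNS:
                match = pattern.search(line)
                if match:
                    counts[match.group(1)] += 1
                    break  # one count per log line
    return counts

if __name__ == "__main__":
    totals = tally_usernames(AUTH_LOG)
    grand_total = sum(totals.values()) or 1
    for name, count in totals.most_common(10):
        print(f"{name:20} {count:8}  {100 * count / grand_total:5.1f}%")
```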
Mathew Schwartz: But I thought this was a great reminder that it's hackers who are attempting to exploit things that give them easy access. So if we're seeing this in the wild, it means most likely that it's working for at least some percentage of attacks - it might be a really small percentage. But of course, you don't want to be that percentage at your organization. So this is a great reminder if you have a remotely accessible Linux system for which root is the username. I spoke with experts and asked: does this, in and of itself, pose a security risk? And the consensus is no. It's also very difficult to get rid of these root services - there are lots of different ways they can be root, not just with the name "root," and people are going to know what they are. So you need to have strong passwords. Even better, Johannes Ullrich, who's the CTO of the SANS Institute, told me you want to be using a cryptographic approach - you want cryptographic keys for these things, to make it really difficult for somebody to attempt to crack it. And when you're using strong passwords, you've got to have MFA. We know that multi-factor authentication can be bypassed, but it makes things much more difficult for attackers. And if for some reason there is a really simple or easily guessable password, or the password gets dumped, or the password gets stolen via a social engineering attack - which continues to happen with unfortunate regularity - then at least with multi-factor authentication you will be blocking the attack. The Cybersecurity and Infrastructure Security Agency in the U.S. has been making a lot of noise about MFA for precisely this reason. This sounds really basic, but we still continue to see too many breaches happening because we don't have these sorts of defenses in place. So, usernames: if you've got root, if you've got admin, user or test, think about disabling remote access for these. Think about assigning usernames to actual users, especially where administrative-level tasks are concerned. Another thing I heard from the experts I spoke to is that this is really important from an auditing standpoint: you want to have granularity about who's doing what and when. That doesn't mean it might not be a malicious insider, and it doesn't mean it might not be a hacker who managed to gain access to the account. But too often when we see a breach, the organization isn't clear what happened or when. It doesn't know if data got exposed, looked at, copied or altered. When you've got better granularity with user accounts - usernames in particular - it'll help incident responders figure out what happened. So again, none of this is earth-shattering. I did think it was fascinating that so many attackers are attempting to remotely hack in via root. And it's a great reminder, again, just to pay attention to those basics.
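(Editor's note: as a rough illustration of the basics Schwartz describes - key-based authentication instead of passwords, no remote root login over SSH, and no generic shared accounts left with login shells - here is a minimal audit sketch. It assumes a Unix host running OpenSSH with its configuration at /etc/ssh/sshd_config; the settings flagged and the list of generic usernames are illustrative assumptions, not an exhaustive hardening checklist.)

```python
#!/usr/bin/env python3
"""Flag risky SSH settings and generic accounts that still have login shells.

Minimal sketch, not a hardening tool: the config path, the options checked
and the username list are illustrative assumptions.
"""
import pwd

SSHD_CONFIG = "/etc/ssh/sshd_config"          # assumed stock OpenSSH path
RISKY_SETTINGS = {
    "permitrootlogin": {"yes"},               # prefer "no" or "prohibit-password"
    "passwordauthentication": {"yes"},        # prefer key-based authentication
}
GENERIC_NAMES = {"admin", "user", "test", "ftpuser", "oracle", "ubuntu"}
NOLOGIN_SHELLS = {"/usr/sbin/nologin", "/sbin/nologin", "/bin/false"}

def check_sshd_config(path: str) -> None:
    """Print any directive whose value matches a known-risky setting."""
    with open(path, encoding="utf-8") as config:
        for line in config:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) != 2:
                continue
            key = parts[0].lower()
            value = parts[1].split("#")[0].strip().lower()
            if key in RISKY_SETTINGS and value in RISKY_SETTINGS[key]:
                print(f"[sshd] {key} {value}  <- consider tightening")

def check_generic_accounts() -> None:
    """Print generic account names that can still log in interactively."""
    for entry in pwd.getpwall():
        if entry.pw_name in GENERIC_NAMES and entry.pw_shell not in NOLOGIN_SHELLS:
            print(f"[accounts] '{entry.pw_name}' has login shell {entry.pw_shell}")

if __name__ == "__main__":
    check_sshd_config(SSHD_CONFIG)
    check_generic_accounts()
```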
Anna Delaney: And as you said, alarmingly, many FTP servers are still internet-connected. So what are the risks of these servers, with the usernames and passwords still accessible online?

Mathew Schwartz: Well, the big risk with FTP is: if you're still using FTP, what other skeletons do you have in your closet? Because FTP should not be internet-connected. Lots of other protocols, like Telnet, should not be used at all - they should be blocked outright, because Telnet cannot be used for encrypted communications; SSH can be. So if you see FTP, you have to wonder what old, easily hacked protocols, old, easily guessed usernames and simple passwords are under the hood there, in that IT infrastructure. That's a real warning sign.

Anna Delaney: Mat, just finally, briefly: is there any merit in attempting to enhance security by using non-standard or unusual usernames? Or is it just better to focus on, you know, strong credentials?

Mathew Schwartz: That's a great question. I wondered if it's obvious how an organization puts this sort of stuff together. Like, if John Smith is an employee at Acme, you know it's going to be jsmith or jsmith01, 02 or whatever. Does it behoove us to try to do something a little more tricksy and throw in a bunch of random characters or something? And the security experts I spoke to said no. They completely shut me down. They didn't even say that's nice in theory. Basically, they said it wasn't worth the effort. You need to have some sort of consistency with how you administer these things. And if you're looking for obscurity - security via obscurity - with your usernames, then I think the battle is already lost. You really need to be focusing on things that actually do something: strong passwords, MFA, cryptographic keys for account access. So, great question - I asked.

Tom Field: Yeah, but we won't be seeing matpuffyschwartz anytime soon?

Mathew Schwartz: I know. Yeah, or dollar signs in there, you know, just to bling it up a little bit. Apparently not.

Tony Morbin: I've just got to throw in the one about, you know, when asked for a strong eight-character password, the solution was, you know, Snow White and the Seven Dwarfs.

Anna Delaney: Very good. Well, on to you, Tony. You're covering the obligatory AI spot this week. So what's unfolding that's caught your interest?

Tony Morbin: You can't really avoid AI at all at the moment. And if you've even just dabbled with ChatGPT or any other generative artificial intelligence, you'll have an inkling of the huge potential benefits. But if you've got no concerns about the potential risks, then you've just got no imagination. It's difficult to discern which are the real threats that we should be worried about and what's just hype or doom-mongering. But the consensus is that we do need to be careful about unintended consequences. So we find ourselves being urged to deploy generative AI immediately to avoid missing the boat, while simultaneously being advised to apply caution, guardrails and regulation. Now, Forrester CEO George Colony, speaking during the company's 2023 North America Technology and Innovation event in Austin, Texas, this week, was unequivocal. He said: you must begin to embrace and engage with this technology now; not next month, not next quarter, definitely not next year. There was a more cautious tone in the comments from KPMG principal Amy Matsuo, in a report published Tuesday, where she urges firms to focus on developing policies that manage how AI is used in the organization and by whom; educate stakeholders on the emerging risks and appropriate-use policies; and monitor regulatory developments and ensure they're complied with.
So regulation really is lagging behind the pace of AI use case deployment. Everyone says security would be built in if we launched the internet today. But the truth is, generative AI has been unleashed on the world in a free-for-all, with governments and organizations scrambling to catch up. In the U.S., White House Chief of Staff Jeff Zients says the president has been clear: harness the benefits of AI, manage the risks, and move fast - very fast. In practice, it appears that U.S. industry is heeding the call to move fast more than the warning to manage the risks. And it seems the EU is leading the race to set AI standards, heading down the route to implementing its cautious approach, which would see outright bans for some applications. Its legislation, which began in 2021, before the unveiling of ChatGPT, has been passed in the EU Parliament, and it's in the process of being enacted. It classifies AI systems based on their risks, with a list of banned applications including biometric identification systems in publicly accessible spaces; bulk scraping of images to create facial recognition databases; and systems that use physical traits, such as gender or race, or inferred attributes, such as religious affiliation, to categorize individuals. High-risk AI systems, such as those used in critical infrastructure, law enforcement or the workplace, would come under elevated requirements for registration, risk assessment and mitigation, as well as human oversight and process documentation. Now, the U.S. is a little further behind in its legislative program, but it does also envisage restrictions being introduced this year on high-risk applications. The proposed legislation does include a framework covering a licensing regime and legal liability for AI firms when their models breach privacy, violate civil rights or otherwise cause harm. But in comparison to the EU, the U.S. is putting the emphasis on enabling creative exploitation of AI, ensuring that regulation doesn't hinder innovation. The U.K. is seeking a middle route for its legislation, but the reality is that commentators expect it to have to follow the rules of its dominant trading partner, the EU.
In the U.S., ahead of its legislation, several tech companies have signed up to a voluntary pledge drafted by the Biden administration. It commits signatories to investing in AI model cybersecurity, red-teaming against misuse or national security concerns, and accepting vulnerability reports from third parties, plus introducing watermarking of AI-developed audio and visual material. Signatories do include Amazon, Google, Meta, OpenAI and Microsoft, and now Adobe, IBM, NVIDIA and Salesforce. Microsoft President Brad Smith, whose company has recently partnered with ChatGPT maker OpenAI to embed AI in many of its products, commented that AI needs a safety brake before it can be deployed without concerns. And William Dally, chief scientist and senior vice president of chip designer NVIDIA, adds that the way we make sure we have control over all sorts of AI is keeping a human in the loop. Now, everybody recognizes that, beyond talk of the end of humankind, there are real risks to AI deployment right now. Legislation is lacking, making it incumbent on industry players to each implement their own culture of risk management during the design, development, deployment and evaluation of AI systems. They should do so to avoid creating unnecessary risk for their customers, but also to ensure compliance with future regulation. And going forward, regulation is expected to seek to address various unintended consequences of AI systems: transparency, limits on access to consumer information, and data safeguards. And this is going to make compliance increasingly complex to achieve, particularly as different jurisdictions implement their own regulations. Ideally, we'd like to see agreed standards, because we have laws of the sea and of aviation. But in reality, it's an area where perceptions of privacy and security vary widely. So if something's possible, it's likely that somebody, somewhere will implement it.

Anna Delaney: As you said, Tony, many countries are looking to form their own AI regulation at the moment. Do you foresee global harmonization when it comes to AI regulation? And if so, how challenging might that be?
Tony Morbin: I think there is probably a baseline limit that people can agree on. But I think it's a bit like, you know, any kind of legislating for governments: if they decide they don't want to agree with it, they won't, whether they sign up or not. And if you look at Chinese use of facial recognition compared to Italy banning ChatGPT because it was collecting too much data - you know, there are very different perceptions of what's acceptable and what's not.

Anna Delaney: Very good. Well, so much happening in the world - in the AI world - so great to have some updates there, Tony. Thank you. And finally, and just for fun: if you could design a cybersecurity-themed amusement park ride, what would it be called, and what thrills would you offer?

Tom Field: I've got one for you.

Anna Delaney: Go for it.

Tom Field: It is one of my favorite rides at Disney. It's called Pirates of the Cryptocurrency. It's a thrill a minute - you don't know where it's going. Sometimes, you don't even know which way is up.

Anna Delaney: And there are no seats.

Mathew Schwartz: But it came down really unexpectedly quickly.

Anna Delaney: Excellent. Love it! Mathew?

Mathew Schwartz: So I would do something on the cyberpunk front called Crash Override. And I don't want to be too literal on how this gets enacted. But basically, I think you could think you're always about to crash, and then somehow you don't. And there will be a lot of green lights. That's about as far as I've gotten so far.

Tom Field: I think I rode that at Universal, actually.

Tony Morbin: Well, my issue - the problem with most amusement park rides is that the passenger is passive, and so the analogy with cybersecurity is not so good there. Otherwise, I would have gone for a Ghost Train with appropriate frights along the way. But instead, I'd go for bumper cars, or Dodgems - but with an adversary team out there, so you have to identify and avoid them, perhaps teaming up with others, even though you can't be sure that they're not on the other side. And I'd leave the name as it is - Dodgems sounds fine to me.

Anna Delaney: Yeah - not that original - I'm going for a roller coaster as well: "The Firewall Theory." Headsets, all the virtual reality, interactive 4D, a roller coaster in the dark, with lots of twists - unexpected twists - and high-speed chases through data tunnels.

Tom Field: We might be seeing Tony's background in that.

Mathew Schwartz: You throw in a little pew-pew map and you've got kind of cyber nirvana.

Anna Delaney: Cyber nirvana. That's it. Well ...

Tom Field: That will be the name of the park, Mat.

Anna Delaney: Yeah. We've got it sorted.

Tony Morbin: I suddenly realized some kind of paintball game might have worked there. I was trying to think of something more active. But yeah, Mat, that's a good idea.

Anna Delaney: As we talk, we are inspired. We are inspired! So, well, thank you so much, Tom, Tony and Mat. It's been a pleasure as always, and a lot of fun.

Tom Field: Thanks so much. Good luck with your roundtable. Can't wait to hear what you come back with about the quantum divide.

Anna Delaney: I'll be back.

Tony Morbin: The quantum divide won't be very big there, will it?

Anna Delaney: Who's to judge?

Tony Morbin: Because of the quantum, you know.

Mathew Schwartz: One qubit at a time.

Anna Delaney: One qubit at a time. Thanks so much. And thank you so much for watching. Until next time.