Anna Delaney: Welcome to the ISMG Editors' Panel. I'm Anna Delaney, and today we're addressing legal and ethical challenges that are reshaping the tech landscape. Our discussion will cover the U.S. Justice Department's antitrust lawsuit against Apple, the complexities of ransomware payments in Bitcoin, and the legal challenges CISOs encounter, along with the alleged breach of Catherine, Princess of Wales's medical records, highlighting data privacy concerns in the era of GDPR and the DPA. And to do so, joining us is our great friend, lawyer Jonathan Armstrong, adjunct professor at Fordham Law School and formerly a partner at Cordery Compliance. Jonathan, it's a real honor to have you join us.

Jonathan Armstrong: Yeah, I'm honored as well. Thanks very much for having me on board.

Anna Delaney: Our pleasure. And completing our quartet, Mathew Schwartz, executive editor of DataBreachToday and Europe, and Tony Morbin, executive news editor for the EU. Great to see you all. And Jonathan, you have a new role, I believe. Do tell us about it - what are you up to these days?

Jonathan Armstrong: Yeah, I do! I've left Cordery, and I'm doing a few new things, so something new to be announced in the late summer. But, in the meantime, I'm helping a business called Elevate get off the ground, and that looks at training non-executive directors. And whether that be somebody who's not been a non-executive director before who might have skills that a board might need, like cybersecurity skills, information technology skills, or it might be training an existing board to fill the gaps that they have. So, that might be technology as well.

Anna Delaney: But you're keeping busy.

Jonathan Armstrong: I am, yeah. I'm busier now than when I was in full-time work.

Anna Delaney: Well, where are you in your virtual world? You look like you're out in the wild there.

Jonathan Armstrong: In my virtual world, I'm on Polzeath Beach, which is where I hope to be tomorrow night once I've left the London traffic.

Anna Delaney: Very good. And, that's in Cornwall.

Jonathan Armstrong: Yes, it is.

Anna Delaney: Just to clarify. And Tony, you're also out in the world.
Tony Morbin: Yes, just down by the Old Bailey, talking about justice, accountability, and particularly looking at accountability for the board and CISOs and what their legal issues are.

Anna Delaney: Very good. Very much in the wild, Mat.

Mathew Schwartz: I know. I feel like we should have a Travel Edition. But yes, these are daffodils outside my local library. So, a spot of calm in the urban oasis or desert or whatever you want to say. But anyway, it's just beautiful here in Scotland when it's not raining.

Anna Delaney: Lovely - spring-like, lovely! And I am sharing with you one of the greatest cityscapes on Earth. This is, of course, London, and it was taken from a rooftop last night overlooking the Thames. So Jonathan, we have a few questions for you. And at this point, I'll hand over to Mat.

Mathew Schwartz: I wish I was jet-setting to that roof, Jonathan - no offense - we could be doing it from there. Maybe next time. But, here we are. And I know one of the things that you've been tracking is Bitcoin, and the payment of Bitcoin for ransomware. Not that we advocate this sort of thing, but obviously it does happen. And when a business decides that it needs to happen, these are some dicey waters, I think, because you could be falling afoul of sanctions if you're a CISO. How does a CISO go about handling this without getting themselves into hot water?

Jonathan Armstrong: Yeah, I think that's a really great question. And I think that it's a really challenging situation. I've always thought it required more thought to pay a ransom. And I know, statistically, a lot of organizations are still of the view that paying gets rid of the issue. I don't think it ever does. And I think the use of the sanctions regime by the U.S. and the U.K. authorities has definitely upped the stakes. And, I think, obviously, there are a couple of reasons for that. I think the first is attribution is always challenging in ransomware. And because these gangs change shape, it's a little bit like trying to nail a jellyfish to the wall. And for the avoidance of doubt, that's not what I'll be doing on Polzeath Beach tomorrow night.
But, the gangs change shape so much that you're not ever clear which individuals you're dealing with behind those gangs. So, potentially, there's always a risk that you're dealing with sanctioned individuals, or that you're dealing with entire states, like North Korea, that are sanctioned. So, there's always that risk with the actual threat actors themselves. And then I think adding Bitcoin adds another level of complexity because, obviously, some of the mixers are also sanctioned, and some of the banks at the other end might be sanctioned as well. So not only do you have a lack of clarity on attribution and who you are paying, but you also have a lack of clarity on how you are paying them, and that route from your cash to where and when they cash out at the other end. So, I think there are no easy answers for CISOs, and a lot of organizations that I see doing this really maturely are having those discussions pre-breach, not after, and meeting as a board to decide where their risk tolerances are. I have to tell you that I think many of them are almost reversing their default position. Their default position is "We will not pay unless and until we find that it's safe to do so, and there's a compelling business reason." And the other thing that I'm simply not sure about - and I don't know whether we have the evidence to back this up at all - is whether it's changing threat actors' behavior as well. Most of the breaches that I've been involved with recently are attacks on vendors of what the customers usually thought were non-core services. So, we can argue whether they were core or not, but things like clocking systems, payroll providers, those common providers. I guess that ransomware threat actors play a numbers game there: "If I compromise 300, then maybe 100 will pay the ransom, and that's still worth my effort." What I'm interested in - and I suspect we haven't got any statistics yet; whether we will, I don't know - is if some of the ransomware threat actors, because they're intelligent and follow developments like this, will divert their efforts to non-U.K., non-U.S. targets more, because they think it will be an easier ride to pay the ransom.
If I'm a ransomware threat actor, and I've got a choice between compromising a French-based payroll provider or a U.K.- or U.S.-based one? Well, I pick the French one instead. And I don't know the answer to any of that yet, but I think that's the other interesting bit. Whilst, therefore, we might say that sanctions are almost a shot in the dark, I think they are changing corporate mindsets, and I think that there's at least an assumption that threat actor behaviors could change as well.

Mathew Schwartz: Fascinating - so many angles to that, that we're obviously waiting to see how a lot of this shakes out, as you say.

Jonathan Armstrong: Yeah, but again, the simple message for the CISO, which we've said endlessly, haven't we, is don't walk this journey alone. This isn't the CISO's call in most corporations, and you need the compliance team, you need the legal team, you probably need the board to do the heavy lifting on deciding if we are going to pay and how we're going to pay.

Mathew Schwartz: Excellent. Great advice! Well, one more question for you before I turn over to my colleagues. There has been a lot of discussion recently about certain royal personages' - I think I use that word correctly - healthcare records, and we talk about data breaches all the time. Sometimes we talk about it with high-profile individuals. And it seems that one of the recent high-profile individuals affected by a data breach may have been Catherine, Princess of Wales. There's been some question about medical issues, unfortunately for her, and there seems to have been perhaps a delayed notification to the ICO that somebody inside a healthcare facility was attempting to snoop on her records. What's your reaction as someone who's followed these data protection statutes for so long? Is this a surprise to you? Human nature can sometimes trump even the strongest of data protection rules, it seems.

Jonathan Armstrong: I think that's definitely true. I think there are often a number of aspects to cases like this, and we've been involved in some. I mean, I think the legal position is somewhat easier since the changes to the Data Protection Act in 2018.
And we've always had criminal offenses under data protection legislation, pre-GDPR and post. And that's not common across the EU, and it's not part of GDPR. It's the U.K.'s wrapper around GDPR, to some extent, and the moving across of old laws. And there are potential offenses under the Computer Misuse Act, if you access a system for which you have authorized access for an unauthorized purpose, and there are also potential criminal offenses under Section 170 of the 2018 Data Protection Act. Section 170 is quite useful in that you can also commit a criminal offense if you're the recipient of data and you refuse to hand it back. And 170 could come into play here if it is true that the records were taken almost to order for a U.S. news outlet. If the U.S. news outlet is served with a Section 170 notice and is told to hand the documents back, then they could commit a criminal offense. And the criminal offense can also be committed by a director or manager, you know, so it can be committed by individuals as well. I think cases like this are relatively common. It's not often that you're taking medical details of a princess to sell a story - I don't think that's super common. But, it's much more common in areas like accident claims. We see, maybe every couple of months, a prosecution where, as a general rule - I don't want to over-generalize - but as a general rule, boyfriend works for an accident repair shop, gets girlfriend to pull a list of recent road traffic accident patients, and then mails them to try and sell them car hire or accident claims services, or whatever that might be. That's relatively common. Clearly, extracting data to move from one job to another is relatively common. I think that's become more common during the pandemic. The particular challenge for health service organizations in circumstances like this is, obviously - again, without overgeneralizing - people who work in clerical positions in hospitals tend to be really poorly paid. And I guess American news outlets after a scoop tend to reward pretty highly. So, there's always this balance: whenever you've got employees with access to super sensitive data, you don't want to pay at the absolute bottom of the pay scale, because then it's easier to get them to do bad things.
And, we see that, particularly, in the area of outsourcing. You know, if we're outsourcing something to Manila, where the rate of pay is relatively low, then the cost to bribe that individual is lower as well, as a general rule. What it takes to bribe you is a factor of your salary. And so, we've seen cases back in the day where Indian contact center workers, for example, were asked how many records they would give for a Snickers, and one guy gave a floppy disk full - in those days we had floppy disks; look it up, kids on the call - gave a floppy disk full of bank customers in exchange for a Snickers. So, there's always that difficulty. The last thing I'd say on this, though, is that, obviously, we know that the ICO are on the case. They said on March 20 that they had received a report - maybe it was late; it should have been reported within the 72 hours. The hospital might not be off the hook: the hospital has to take technical and organizational measures to prevent data theft. It's a known thing that whenever you've got celebrity patients, it's more of a risk. They obviously have to have in place a training program. Online training will not be adequate for these risks. So, if I was them, I'd want to be seeing a face-to-face training program for those individuals with access to this data, I'd want to see access controls, I might want to see some sort of heuristic-type system running over the network looking at access, and I definitely want to see very detailed access control measures to make sure that only people with the need to know access that data. If the hospital hasn't done all of those things, then I think there's a risk that the hospital might be fined too. And bear in mind that the ICO has fined hospitals and pharmacists and people involved in medicine previously for incidents when employees have been careless with data. So, they're not off the hook yet, from what little we know at this stage.

Mathew Schwartz: Well, great lessons that can be learned from this, among these other incidents. Thank you so much. I'm going to hand you over to Tony.

Tony Morbin: Great, thanks very much.
Jonathan, I was getting very interested when you were saying that you're getting involved in training for the board and training the board on cybersecurity and those areas. Because my question is related to a recent U.K. government survey, which showed that many boards remain under-engaged in cybersecurity; there's a lack of cyber expertise on the board. So, my question was - I'm really bowling you underarm here - how can you rectify this situation, but also, is cyber insurance and directors and officers liability insurance undermining our ability to make boards accountable?

Jonathan Armstrong: I think they're really great questions. I think that we are seeing more of a move towards board responsibility. So, if we look at things like DORA and the U.K. equivalent, then there's more of a concentration on individual accountability and those at the top being accountable, and that's probably a good thing. I think there is definitely a lack of cybersecurity and tech skills on boards, obviously - otherwise, I wouldn't be wasting my time with Elevate and getting that off the ground. And, by the way, that's not just a U.K. issue; there's an EY study that I think says, from memory, something like 56%, maybe, of Fortune 100 boards - I think it is - have a gap in terms of cybersecurity skills. So, I think it definitely is an issue. And, D&O insurance is often seen as the panacea - that we don't have to do things well because we've insured against doing them badly. But, I think insurers are being much more assiduous in asking questions of organizations, and obviously, still some are struggling to get cover, or cover at the right price. And I think the lack of awareness is an issue. Obviously, whenever you're involved in a major data breach, then usually the non-executive directors will want to be involved. If there is a sense that the executives on the board should have done more to prevent the data breach, then you might find that the non-execs are leading the investigation into the data breach, and leading the response. And, I'm in the wrong position to say this, but usually it's older, white men.
And, I've been in a situation, for example, where the non-exec director who the board wanted to lead the response to what was quite a technical data breach - I had to show him how to switch his iPad on to start the meeting. And, we do need to upskill boards, and that's obviously upskilling existing board members so that they understand the risk. And in many cases, it will be getting new board members in with that diversity of background and diversity of skills that are going to enable the board to respond. And maybe the last thing I'd say is the time to learn all this stuff isn't in the heat of a breach. So, you've got to rehearse, and you've got to rehearse as a board. You've got to do your playbook, work out what the roles and responsibilities are. You've also got to educate the board on the need to respond quickly and look at all those various issues involved.

Tony Morbin: Okay, I'd like to sort of follow up my other question, also on the area of accountability and responsibility, and I'm particularly thinking about things like NIS2, where they're actually having named people with criminal liability.

Jonathan Armstrong: Yeah.

Tony Morbin: Considering the diverse range of stakeholders involved in cybersecurity risk and now generative AI risk, should the cybersecurity responsibilities be distinct from AI security tasks? And also, who should ultimately be accountable for overarching security risks at a board level? Or is it personal at all? Is it going to be a risk committee?

Jonathan Armstrong: Yeah, I think that's a really good question as well. I spoke at two conferences on two consecutive days the week before last, one to cybersecurity professionals and one to chief compliance officers. And obviously, I said, "Who's responsible for things like NIS2? Who's responsible for DORA?" All of the compliance officers said the CISO; all of the CISOs said the compliance officer. And I said to both groups, you know, when you go back to the office, if you're the chief compliance officer, take the CISO out to lunch.
I have no dog in the fight as to who it is, except that I would say that if you follow the traditional compliance model - if you're an organization that follows the traditional compliance lines of accountability - then your compliance officer probably shouldn't be the one taking active decisions; they should be checking that those decisions have been taken. So, as a result, first-line responsibility probably does rest somewhere with the CISO. I probably agree with you that AI maybe needs a different set of decision makers. Whenever we look at GDPR, for example, regulators are getting more acute about looking at conflicts of interest. So, GDPR says that a data protection officer can do other duties, but those other duties must not conflict with their role as a DPO. And I think we will end up with a situation like that with AI as well, to some extent. And obviously, the EU AI Act is now passed, but it isn't going to be in force for two years. Businesses can volunteer to adopt early; I think few will. But, I think we do need that same check and balance and distance from the business with AI versus the CISO. And obviously, in some cases, it will be the CISO that's adopting AI tools to help with the security posture. So, I think for most organizations, there needs to be a period of reflection. I've been working this morning on something for a large corporation that's altering its code of ethics, across the whole business, to have provisions in it about AI: how does AI fit within its compliance and ethics framework, who's responsible, etc., etc. I suspect that for most organizations, again, it goes down to the board, doesn't it? The board needs to have the right skills to be able to understand AI. And, that isn't saying, "We don't have the skills, we're going to opt out," because shareholders will demand that businesses adopt AI when it's sensible to do so. We're already seeing pressure in that environment. You know, the Horizon Report, for example - the suggestion seems to be about the proper use of technology-assisted review, so an AI tool on the e-discovery software.
From what I hear, there's credible evidence to suggest that if they'd used that AI functionality, the review would have been quicker, cheaper and more accurate, and there would potentially have been fewer consequences for the postmasters and postmistresses involved. So you can't just opt out of AI. You can't put your hands over your ears and pretend it isn't happening. And boards need that level of skill to distinguish where their pounds and dollars are being spent properly, and what the risks are.

Tony Morbin: I totally concur. I think you're right, and you gave that lovely example of the two opposite answers that people gave. It hasn't been decided; it's still up for grabs at the moment. I'm going to hand you back now to Anna, but thanks very much, Jonathan.

Jonathan Armstrong: My pleasure.

Anna Delaney: Excellent. This is inspiring so many stories, Jonathan. Well, I just want to ask you about a significant story which erupted last week in the tech world, and that's the antitrust lawsuit filed by the U.S. Justice Department and a coalition of states against Apple, saying that Apple has prioritized its stronghold in the smartphone market at the expense of user privacy and security. So, I'd love to hear your take, Jonathan, on what this means. What are the implications of this case? And how likely is the DOJ to prevail? What might it mean for iPhone users and cybersecurity firms?

Jonathan Armstrong: Well, I will predict that the case won't be over this year. I'd also think that it's a long-running battle; we're going to see more and more antitrust and competition law aspects in the tech world. And maybe about four or five years ago, I interviewed Max Schrems, the privacy campaigner. And, we had a really good debate, I think, about this. And the proposition that we discussed is almost like a triangular form of tech regulation. And, that would be data privacy regulators, data protection regulators; it would be the fair trade regulators - the FTC in the U.S., the CMA in the U.K., for example - and also competition law regulators as well.
And in some respects, I think, the DOJ - whilst it has many arrows in its quiver - is acting effectively as an antitrust regulator in this case, and obviously, some of the staff on this matter have done Federal Trade Commission and more conventional antitrust cases in the past. So, I don't think it's a surprise that this is happening. I think we're going to see more and more impact of, as I say, competition or antitrust law in the tech world. I think in AI, particularly, we're going to see a lot of that, because so much of the generative AI world is dominated by so few players. And, obviously, antitrust law is a really cumbersome weapon to use, because it takes time, because we're arguing about dominant position and anti-competitive behavior. But this isn't the first rodeo for Apple - you know, the EU started an investigation into Apple in June 2020. I think this is the third investigation into Apple for anti-competitive practices. But this one - I've not read the entire complaint; I know it's 88 pages long, I think - from what I understand, it's sort of playing to the public audience a bit more; it's more simplified language than some of the earlier complaints. And some of it is about almost emotive factors, you know, whether people who have Android versus iOS on their mobile devices feel less privileged because their messages appear in green and not in blue. And, is this some new form of tech apartheid, if you like? And, those behavioral factors will be really interesting. Conventional monopolies - a lot of science goes into them, but that's economists, and it's - I've borrowed an office from an outfit called Pontus Arsalan in London today, and they have teams of analysts that look at antitrust-type cases and market share. That's a science, but it's a relatively developed science. The fact of whether you like the form of an iPhone better than an Android device is somewhat more a theory, and I know from my lockdown courses in design that many say that, you know, Jony Ive stole that design from the Braun work just postwar. So, how much of that is Dieter Rams at Braun, how much of it is Jony Ive, and how much of it is for antitrust regulators - is it theirs to get their fingers on?
But, we definitely, as I say, will see more impact of antitrust, and obviously this has a real impact for every other business as well. You know, if you're setting up a new AI system and you're using an OpenAI platform, for example, and OpenAI are going to face allegations that they're a monopoly, then that might impact your AI operations. Obviously, part of Apple's defense to some of their practices, like the walled garden for apps, is that it makes things more secure. A resolution might be that they have to open up the walled garden more, which might make iOS less secure, if Apple is to be believed. So, it's not just something that we, you know, buy popcorn and ringside seats for and watch as disinterested participants. We've got skin in this game as well, as organizations, as corporations, because it could affect our security stance, and it could affect some of the stuff we're building now.

Anna Delaney: Lots of really useful insights there. And, I'm glad you kept creative in lockdown as well - very impressed. I want to ask you, as my final question, about the Common Crawl challenge. Common Crawl presents a vital resource for AI developers, offering an extensive dataset that can enhance AI training and development. However, this wealth of information often includes copyrighted material, raising complex legal and ethical challenges regarding copyright laws. So, how can AI developers, Jonathan, navigate this intricate balance between leveraging extensive datasets like Common Crawl without infringing copyright laws, especially in light of this sort of chicken-and-egg dilemma of needing such material for AI's effectiveness, but potentially violating copyright in the process? What are your thoughts?

Jonathan Armstrong: Yeah, another great topic, Anna. Obviously, I've not been in a courtroom for a long time. When I was, I used to think I was pretty good at cross-examination, but I've learned from you three today. I think as far as Common Crawl is concerned, there are particular issues with that as well. I think the message for people is that if you're training your AI tool to do something, you need to watch the quality of the data and where that data is coming from.
For those of you not familiar with it, there's a research report out by the Mozilla Foundation which suggests that about 80% of gen AI is trained on Common Crawl data. And, Common Crawl is obviously a big dataset. But, it wasn't originally set up to provide training data. Common Crawl has all sorts of purposes, if you like, and it partly originates, ironically, from a move to give alternatives to big tech, but it's big tech who seem to be using Common Crawl the most. And, you know, different people have directed Common Crawl to search different bits of the internet, to simplify. So, for example, researchers researching hate crime got Common Crawl to gather elements of hate crime for their academic research. And obviously, if you're doing stuff like research, then the copyright rules are different than if you're an out-and-out for-profit organization. So, some of the copyright issues originate there. And some of the issues with chatbots saying bad things - you know, Replika would be an example - originate, I guess, because stuff like hate speech is included for those researchers - and for full disclosure, my daughter's one - who are researching hate on the internet and what the impact of that might be. So, we've got an issue where Common Crawl perhaps was fit for its original purpose, but probably isn't fit for the purpose it's been shoehorned into. Just because it's a big dataset, it doesn't mean to say it's the right dataset to use to train. And the other issue that we're seeing is, of course, the issue with rights holders, who said, "Well, hang on, our stuff got included, perhaps under copyright exemptions - perhaps it shouldn't have ever been included, but our stuff got included in Common Crawl - but we made it clear in the access agreements on our website, or the terms and conditions, etc., that we didn't allow that stuff to be used for commercial purposes, which it now appears to be. If we ask ChatGPT a question, we'll get some of our content back, or if we ask gen AI to print something in the style of a particular author, then we might find that that's been trained on stuff that it shouldn't have been trained on."
And, we're going to get a lot of litigation on this, and this litigation is probably going to produce different results in different countries, because copyright law is different in different countries. And, at the same time, we're getting what you might call special pleading by the tech bros - with some of the startups around the periphery, but including people like OpenAI - saying, "We're special, and conventional rules shouldn't apply to us, because AI is for the greater good. And hey, we're on a mission." And I'm not sure that that stacks up. Copyright law is a fundamental right. And if we start and say that people can rip off ISMG films and use them to train Sora, then the creativity of you and Tony and Mat is undermined. And, also, the financial model disappears as well. So, I think we've got to be careful about enriching MIT graduates by accident by giving them a pass on laws that have existed for a long time, just because they're spinning what some might say is copyright theft as something a bit different. You know, if I said, "I'm in Trafalgar Square at the moment, I'm just going to walk into the square and sell Adidas T-shirts, and I should have a special pass because I'm going to give some of the money to humanity," I think I'd probably still be arrested. I think I'd probably still be sued by Adidas. Why is it different? Because I, you know, met up with somebody in a dorm at MIT and came up with a whizzy tech idea?

Anna Delaney: Well, thank you so much for the thorough answer there, Jonathan. You've brought a lot of clarity to a rather complex issue. So, thank you. And before we wrap up, just for fun, we have one final question for you. But, Jonathan, maybe you've deserved a pause, a break - well earned. If a genie granted you one legal cybersecurity wish, what would it be? Not an illegal one, a legal one. So, think Aladdin's cave at the moment. Mathew and Tony, you want to jump in?

Mathew Schwartz: Sure. Yeah. A legal, not an illegal wish, right? I would hold ransomware threat actors - just to use that cybersecurity buzzy term - to account, even if they lived in Russia. So, I think the most direct route there might be to have Russia extradite citizens to face crimes allegedly committed abroad.
Anna Delaney: Love that.

Jonathan Armstrong: Good one!

Anna Delaney: What are the chances of that happening, Jonathan? We'll see - we can only imagine.

Jonathan Armstrong: I'll give you a souvenir Cordery pen if it happens.

Anna Delaney: Okay.

Jonathan Armstrong: One of my last few.

Anna Delaney: Put the bet on! Tony?

Tony Morbin: Yeah, having a wish reminded me - I asked my then seven-year-old son what he would do with three wishes. And he answered: the first one was to go to heaven when he died. I don't know where that came from, because we're not religious. The second was to be wise, so he'd know what to do with his third wish. So, I thought that was a really good one. So, given the ever-evolving nature of cyber threats, you know, a single wish won't necessarily cut it. So, I'd go for having the wisdom to always implement the best course of action available to prevent or react to future threats. So, basically, I'm stealing his wisdom one.

Anna Delaney: Very good - the best judges are the seven-year-olds, I think. Well done! That's a very wise answer. I am going for: imagine if we could have one global, universally accepted set of rules for cybersecurity and data privacy. So, just one playbook that everyone, everywhere follows. I think it would just make things so much simpler for companies trying to protect data across all these different countries.

Tony Morbin: As long as it's yours.

Jonathan Armstrong: No disrespect to both of you, but I don't think I'm going to be giving up my pen anytime soon to either of you. As I said, my passion at the moment is getting more cybersecurity, more tech skills onto boards. And, if I could wave a magic wand, then that's what I'd be doing, I think. I think it's invidious, isn't it, that most boards take it as a given that they have somebody with robust accountancy qualifications to be there as the sounding board for the FD. It would be laughable if somebody suggested that there was a public company without somebody with a financial or an audit background on the board. And yet, cybersecurity is a systemic risk to most organizations.
And, we don't necessarily insist on, and we sometimes don't value, cybersecurity skills at a board level. That's got to change, because whilst that doesn't change, threat actors can escape and run a merry dance.

Anna Delaney: We love that answer. I think the genie will certainly be kept busy granting those wishes. Well, Jonathan, we've really enjoyed this. Thank you so much for all the wealth of information, knowledge and education that you shared. Please do join us again soon. Will you be back?

Jonathan Armstrong: I'd love to! It's been a lot of fun. No, and yeah, very enjoyable.

Anna Delaney: Thank you. And thanks, Tony and Mat. Thank you so much for watching. Until next time!