Tom Field: Hi there. I'm Tom Field, senior vice president of editorial with Information Security Media Group. I'm delighted to be talking about API security, and even more delighted to be talking with Richard Bird, chief security officer with Traceable AI. Richard, it is a pleasure to see you again.

Richard Bird: I always love hanging out and talking with you, Tom.

Tom Field: API security - I would bring this up in conversation maybe two or three years ago, and the joke was always: if anyone gave you a number for how many APIs there are in the enterprise, you multiply that by 10 and you might get close. Now here are security leaders talking about API security, talking like they're seriously addressing it. A question for you: is the needle moving positively?

Richard Bird: I think the needle is definitely moving in the right direction, but I think there are two problems with the activity. When I started a year ago at Traceable AI, I heard many of the same things: "API security? I don't have a problem - I've got a WAF, I've got a gateway." Or, "API security? I'm going to get to it."
And now, a year later, I hear a lot of ideation, a lot of strategy conversations about "we need to do something - what are we going to do?" When we look at really tangible movement of the needle, we're seeing it in highly regulated industries, specifically banking. It sometimes sounds a little frustrating, coming from bankers, to say that financial institutions lead the way, but it is the truth: they're the honeypot of the most valuable economic gain for the bad actors. So within the highly regulated industries, we're seeing heads of API security being stood up, and we're seeing API security being talked about in terms of which part of the organization it should be aligned to from a leadership standpoint. So we're definitely seeing movement. But once we get outside the highly regulated industries, we're really seeing people in a thinking, pondering and data-collection state. And yet we're seeing more and more exploits against the API attack surface, which would suggest that people need to be thinking and moving and collecting data a little bit faster.

Tom Field: Well, that was my next question. So let's go to the other side of the battlefield. How are the cybercriminals taking advantage of APIs as an attack vector?
Richard Bird: Recently I had a conversation about how hard it is for large enterprise security organizations to shift the direction of the ship. We've had 20-plus years of what I call, notionally, the cybersecurity industrial complex: cybersecurity organizations have entrenched budgets, entrenched organizational structures and entrenched leadership. And the comparison I've made recently, as it relates to the bad actors, is that we're trying to fight a war against a guerrilla army made up of very small groups and units of people who can move quickly, and we're trying to fight them with an entire brigade. Everybody knows it's much easier to move a Special Forces unit than it is to move an entire army. And that's really what we're seeing: there's a huge amount of inertia and friction in trying to orient your organization toward solving for API issues, and yet the bad actors are moving extremely quickly and discovering ever more interesting new ways to leverage APIs to do bad things.
And there's one really important finding that I've had in the last, I'd say, 90 days that really took me aback. I always like to say that you know somebody is a hardcore security practitioner when a hack happens and they go, "That's really scary - but man, that's really cool." On the good-guy side of the equation, we still tend to admire things that rise to that level of sophistication or innovation.

Tom Field: SolarWinds, right?

Richard Bird: Yeah. But what I saw recently is bad actors using APIs to conduct combination attacks in the same campaign against a single target - meaning they're using API volumetric attacks, API application DoS attacks, and then API fraudulent account creation: three or four or five different methods. And when you think about how security organizations have been structured over the last 20 years, we are almost singularly focused on one plane of attack, or one point of attack. So this idea of bad actors being able to use four, five, six, seven different types of attack in one campaign - obfuscating or creating a diversion, and then being able to commit the actual crime they're trying to execute under the covers - that's shocking. And I think that's going to be a really big wake-up call for most organizations in the next year.

Tom Field: Okay, I want to follow up on that, because you pointed out to me that the threat actors are now using bots, residential proxies and patterns, authentication, authorization, and an understanding of vulnerabilities, all to conduct fraud. Where does that intersect with security teams and their responsibilities? Because fraud and security don't necessarily speak all the time.

Richard Bird: That is an outstanding observation, because it's even more complicated than that. I'm really fortunate: a number of companies founded over the last couple of years sit specifically in the traditional fraud space, and friends, colleagues and former bosses of mine are now running them, so I get an opportunity to talk with them. And for me, coming out of 17-some-odd years in banking, I've maintained my contacts in the classic fraud organizations. But it's not just that security and fraud haven't historically come together.
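The combination attacks Richard describes are hard to spot precisely because siloed teams score each signal separately. As a purely illustrative sketch - the detectors, targets and thresholds here are hypothetical, not any vendor's logic - correlating per-method alerts by target is what surfaces a multi-method campaign:

```python
# Hypothetical sketch: correlating per-method API attack signals by target.
# Siloed tools score volumetric, DoS and account-creation anomalies
# separately; a combination attack only stands out when the alerts are
# grouped per target within one time window.

from collections import defaultdict

# (target, attack_type) observations from separate, siloed detectors
ALERTS = [
    ("bank.example.com", "volumetric"),
    ("bank.example.com", "app_dos"),
    ("bank.example.com", "fraudulent_account_creation"),
    ("shop.example.com", "volumetric"),
]

def combination_campaigns(alerts, min_methods=3):
    """Flag targets hit by several distinct attack methods in one window."""
    methods_by_target = defaultdict(set)
    for target, attack_type in alerts:
        methods_by_target[target].add(attack_type)
    return {t: sorted(m) for t, m in methods_by_target.items()
            if len(m) >= min_methods}

# Only the target seeing multiple distinct methods is flagged as a campaign.
assert list(combination_campaigns(ALERTS)) == ["bank.example.com"]
```

Each individual alert above looks routine to its own detector; only the per-target grouping reveals the diversion-plus-crime pattern Richard describes.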
We've seen some movement there with the creation of chief security officers and the bringing of fraud under them, but that's relatively limited - that's very advanced, large enterprises. And there are even more complications: fraud doesn't just touch fraud organizations, and it doesn't just touch security organizations. It has implications for legal organizations around data privacy. It has implications for compliance, as we've seen with the Optus or, probably, the Medibank breach in Australia, where the government actually had to step in and say, "We're going to have to put loss limits on the laws we passed about how much you can be fined," because the amount of customer data that got accessed was so massive that otherwise Medibank would be out of business. So the mechanics of all the siloed functions within organizations that have never really integrated are now faced with a world where all of these business functions are knit together with APIs. That lack of coordination is, right now, a capitalizable opportunity for the bad guys. The more that we're all standing around like the old proverb of the six blind men and the elephant - fraud saying, "That's not really fraud, because it didn't involve a magnetic stripe swipe or a copper plate," and the security people saying, "Fraud's not really our business; that's the fraud guys" - the bad guys love that, because it's a bunch of who's-on-first conversations. And we're definitely seeing that type of confusion happening when these API breaches and exploits are being executed.

Tom Field: So bring it back to one of your favorite topics - identity. What needs to happen now at the identity layer to be able to prevent fraud and bolster one's security posture?

Richard Bird: I'm of two minds on that question, because a certain amount of me is in wait-and-see mode. But that wait-and-see mode is really aligned to this big change I made in moving to API security. I always like to say identity is the softer side of security - the "softer side of Sears," right? Identity is the gateway for the human element to interact with the digital world.
So there's a lot of emotion, a lot of tension and friction tied up in how you execute an identity strategy. Historically, though, for all the achievements we had in the identity space - finally getting single sign-on, federation and authentication in place - we distributed authorization to the wind, actually to the same application developers we've distributed APIs to. And the reason I'm of two minds about progressing what we need to do with identity in this space is that I do believe - and I say this cautiously, because I cannot often be confused with an optimist - I'm excited about the possibility of us finally achieving fine-grained access control. When people ask why I left identity, I'm always like, no, I went to API security, and that's going to be identity in the next 24 months. More and more security practitioners are agreeing with me on that observation. What I think is possible is this idea of finally putting a set of security guardrails around authorization - where all of the access, not just to where the exponential business value is, but discrete access by your mom, your dad, your grandparents to these particular bank accounts or these particular pieces of information, can be secured post-authentication. The implications for a more secure world are really big. Here's the thing: we just can't screw it up like we screwed up all of the preceding things we've done in security, where we finally spent years sorting out authentication, and then you authenticate once and get access to everything - meaning all I've got to do is be you, and I get your stuff. We can't end up with the same type of poor decisions being made relative to securing the authorization plane, because we finally have a technology solution that can do it. So let's try and do it right this time.

Tom Field: Back to API security. Let's set aside the whole challenge of knowing how many APIs are in your enterprise - we know that's an issue. What are the other API security gaps you commonly see, and how are organizations trying to fill them?
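The fine-grained, post-authentication authorization Richard describes above can be illustrated with a small sketch. This is a hypothetical toy, not any vendor's implementation: default-deny, with explicit per-resource grants, and the subjects, actions and account names invented for illustration.

```python
# Hypothetical sketch of fine-grained, post-authentication authorization:
# authentication proves who the caller is; this layer decides, per request,
# whether that identity may perform this action on this specific resource.

from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    subject: str   # authenticated identity, e.g. "mom"
    action: str    # e.g. "read", "transfer"
    resource: str  # a discrete object, e.g. "account:1234"

# In a real system this policy store would be external and auditable.
GRANTS = {
    Grant("mom", "read", "account:1234"),
    Grant("mom", "transfer", "account:1234"),
    Grant("grandpa", "read", "account:1234"),
}

def authorize(subject: str, action: str, resource: str) -> bool:
    """Allow only if an explicit grant exists (default deny)."""
    return Grant(subject, action, resource) in GRANTS

# Authenticating as "grandpa" is no longer enough to move money:
assert authorize("grandpa", "read", "account:1234") is True
assert authorize("grandpa", "transfer", "account:1234") is False
```

The point of the sketch is the contrast Richard draws: being someone is no longer the same as getting all of their stuff, because each discrete access is checked after authentication.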
Richard Bird: Well, one is a gap in, I guess, functional knowledge about how APIs are exploited. A beautiful thing about APIs is that they are coded in such a way that you can divine what they were built for. This allows us to do things like create automated documentation, because for the last 30 years - more than that, actually - application developers annotating their code has been a substantial problem. By being able to use the design specs from an actual read of that API to create documentation, we now know what it's supposed to do. The reason that's so important, and the reason there's a knowledge gap about how API exploits work, is that bad actors are incredibly clever at making minor adjustments in how they use those APIs over the course of a few days, a few weeks, a couple of months - this idea of a low-and-slow attack. And without a normative baseline capability to compare what an API should be doing against what it is now being used for, you get what leads directly to the claw hammer analogy I've used frequently: the people who designed the hammer - a carpenter's hammer - designed it for two functions, to drive nails and to pull out nails.
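The "normative baseline" idea can be sketched in a few lines. In this hypothetical toy - the endpoints, methods and parameters are invented, and this is not any product's detection logic - a spec-derived baseline records what each API was built for, and observed calls that drift from it are flagged:

```python
# Hypothetical sketch of baselining API behavior from its design spec.
# A spec-derived baseline says what each endpoint was built for; observed
# traffic is compared against it, and drift (an abuse case) is flagged.

# Toy "spec": endpoint -> (allowed method, expected parameter names)
SPEC = {
    "/accounts": ("GET", {"account_id"}),
    "/transfer": ("POST", {"account_id", "amount", "destination"}),
}

def check_call(endpoint: str, method: str, params: set) -> list:
    """Return a list of deviations from the baseline (empty = normal use)."""
    if endpoint not in SPEC:
        return [f"undocumented endpoint: {endpoint}"]
    findings = []
    allowed_method, expected = SPEC[endpoint]
    if method != allowed_method:
        findings.append(f"unexpected method {method} on {endpoint}")
    extra = params - expected
    if extra:
        findings.append(f"unexpected parameters on {endpoint}: {sorted(extra)}")
    return findings

# Normal use: nothing flagged.
assert check_call("/accounts", "GET", {"account_id"}) == []
# Abuse case: a parameter the endpoint was never designed to take.
assert check_call("/accounts", "GET", {"account_id", "admin"}) != []
```

A low-and-slow attack shows up here as a trickle of small deviations over weeks; without the baseline, each individual call looks unremarkable.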
Nobody ever expects a hammer to be used for any other purpose until the news broadcasts that somebody used one as a murder weapon. That is a really dark analogy, but the truth is that a hammer was designed to do a thing, yet it can be used to do other things it was never intentionally designed for. And that kind of knowledge gap leads to arguments internally within enterprises: "Well, API security is AppSec." "But I can do an app DoS attack, and that's not an app vulnerability when I'm trying to clobber your ..." And they go, "Oh, wait, I didn't know you could use the API for that." So I think it's on the solutions industry to help the enterprise world understand this reality of abuse cases versus use cases, and that both can be encompassed, or embodied, within the same API. That is a knowledge gap that needs to be closed extremely quickly. And closing it may also help move that needle we talked about at the beginning, because as people understand this characteristic of APIs, they'll begin to recognize the threats and risks that really are within their organizations.

Tom Field: Richard, what are you doing in your new role to help advance the cause of API security?
Richard Bird: Well, I think it's probably a repeat of what I did as a champion for identity - which I haven't stopped doing, by the way; now I just split my work between two Don Quixote causes. But I spend a lot of time now speaking with a much broader and more interesting market. An example being, I've recently been asked to be a part of technology and economic trade missions to a number of different countries. And that gives ... sure, it's fun to travel, but the reality is it's really fun to have conversations with people where maybe some of these security issues are manifesting in different ways within their countries or their legal structures. So there's a lot of that activity going on.
And for me, it's still keynotes and speaking and being a voice - and, as my relationship with ISMG over the years shows, not simply repeating the company line, but being willing to be diplomatically contentious and trying to move the conversation into healthy debate, with a recognition that whatever we've been doing - regardless of the security domain, API or identity or threat and vulnerability management - the scoreboard clearly shows we're not getting it right yet. So let's be honest with each other that we're not getting it right, and let's have an honest dialogue and debate about how we improve it. That's really what I'm staying focused on.

Tom Field: Diplomatically contentious - that's our next topic. Richard, thank you so much for your time. Appreciate it.

Richard Bird: Not a problem. Always enjoy it.

Tom Field: Again, the topic has been API security. We just heard from Richard Bird, chief security officer with Traceable AI. For Information Security Media Group, I'm Tom Field. Thank you for giving us your time and attention.