"An attacker just needs to find one weakness or one person who makes a mistake or one poorly set up system or one way to get in that nobody ever would have thought of," says Deborah Frincke, the top computer scientist at the Pacific Northwest National Laboratory, in an interview with GovInfoSecurity.com (full transcript below). "It just take one creative approach or one accident to get into a system, and the defender has to get it right all the time, 24/7, and so being proactive isn't as easy as it sounds.
"Most of our difficulties nowadays are that we keep trying to patch existing systems and the nature of them, it is so hard to keep up with the patches, that it is has been hard to get investment in those longer range approaches, which are really what are required if we are going to change the current status quo."
With a modest $2 million budget, researchers at Pacific Northwest are taking a four-pronged approach to provide an alternative to that patch mentality, researching the disciplines of predicted offense, adaptive systems, cyber analytics and trustworthy engineering.
Frincke spoke with Eric Chabrow, managing editor of GovInfoSecurity.com.
(A summary of the lab's R&D activities can be found here: i4.pnl.gov.)
ERIC CHABROW: Tell us a bit about the cybersecurity research program at Pacific Northwest National Lab.
DEBORAH FRINCKE: The cybersecurity research program at the Pacific Northwest National Lab is actually divided into several different pieces, and I will start with the one I am most familiar with, and that is the one that involves our open research.
What we have is an internal investment of about $2 million a year in security that is intended to move the Department of Energy in particular, but also the nation as a whole, from a catch-and-patch mentality to more of a proactive prepare-and-prevent mentality. And so we are looking at activities that will allow systems to be safer and more secure through a more proactive approach.
CHABROW: It's interesting that you mention being proactive; I was speaking with a Cornell University professor who, in recent testimony before Congress, said that the problem we face today in cybersecurity is that we tend to be too reactive, that we patch things and do not think as the attackers think. Is it quite a challenge to devise methods to think like an attacker?
FRINCKE: Yes, it is. The hard part is that, of course, an attacker just needs to find one weakness, or one person who makes a mistake, or one poorly set up system, or one way to get in that nobody ever would have thought of, like the folks who break into garage doors by using antifreeze on the locks. It just takes one creative approach or one accident to get into a system, and the defender has to get it right all the time, 24/7, and so being proactive isn't as easy as it sounds.
But I agree with your other speaker. Most of our difficulties nowadays are that we keep trying to patch existing systems, and given the nature of them it is so hard to keep up with the patches that it has been hard to get investment in those longer-range approaches, which are really what is required if we are going to change the current status quo.
CHABROW: Is there a way to explain how to be proactive, how thinking has to change to be able to come up with the ideas to not just react?
FRINCKE: One needs to make changes on several levels. First, the mindset about security has to change. Many modern systems, probably most, were not designed with security and usable security in mind. Even when security is added up front, it tends to be so difficult for the users that they don't like to use it, like passwords that are so long that you have to write them down and stick them on a sticky note on your computer.
From the beginning, from the very design stage, one needs to consider security that is usable by the community it is intended for, so that it actually makes the system more secure. The second thing is to think more carefully about what it is you are going to secure a computer against.
I don't believe anyone thinks it is going to be possible to think through all possible attacks that can happen to a computer; that is not what we are proposing. What we are proposing is more thoughtful design assistance, so that if you know a computer that has never been on a network is going to be connected to one, there are a lot of attacks that we understand come across the network, and we can take a look at the defenses of that individual computer to see if it can meet those challenges.
Similarly, if an organization suddenly changes its business model and becomes more attractive to identity thieves, or it is trying to protect something more valuable, it needs to look at its defenses against the well-known attacks, too. A lot of this sounds like common sense that we tend not to invest in up front.
CHABROW: At your lab, how do you identify what areas of cybersecurity research to pursue?
FRINCKE: When we are thinking about which research areas to pursue at our lab, we begin with our primary sponsor, the Department of Energy. For instance, in the research program I am leading now, we thought about all of the research activities that other agencies and other national labs were engaging in.
I actually set up a little table of those things and then thought: what if all of those research projects are successful? What's left that is most important? Coming back to this goal of a safer and more secure Department of Energy and nation, and a proactive approach, we began with those questions, and I identified four different areas of investment. Those are the ones we are pursuing with our internal investments now.
CHABROW: What are those areas?
FRINCKE: We call them predicted offense, adaptive systems, cyber analytics and trustworthy engineering.
Predicted offense is the act of taking prediction, or thinking ahead, and tying it to defensive actions. What we often find is that people will pursue what they think of as situational awareness or situational understanding, so you may understand what is about to go wrong, but if you don't think at the same time about the defensive actions you plan to take, while you are looking for faults in advance, all you are going to get, as I like to say, is a headache, and you are not going to be able to achieve anything new. So predicted offense is the act of thinking ahead and thinking about which actions you can take.
Adaptive systems are tied to the study of cybersecurity that lets you react flexibly, and not necessarily with a human directly in the loop. Nowadays, the cyber threat is so fast that when an attacker discovers or decides to exploit a flaw in a system, an attack can begin in one place and circle the globe in less than 10 minutes; many researchers have shown that worms in particular, and botnets, can travel very, very rapidly. So if you are going to combat a threat of that type, you need to have systems that are resilient, as we like to say, that can adapt to the threat in such a way that they can respond to these very rapidly moving threats. Still with human guidance, but the guidance often has to be given in advance, in terms of thinking ahead about what you would like a system to do in the face of a threat.
The third area is cyber analytics, and that is one in which we have just begun to make some investment in our cybersecurity research. Cyber analytics is, broadly speaking, the collection of all the different activities you need to undertake if you want to understand what is going on in your environment. It includes looking at traditional network traffic to see if you can identify a new threat, but it also involves taking advantage of nontraditional data, information perhaps about what is going on in the world. Are two nations potentially at war? You might want to look a little differently, then, if you see interactions between those countries.
The fourth area is trustworthy engineering. Now, we haven't directly invested in that through this research, but it is an area we hope to expand into, and the idea behind trustworthy engineering is to look at all the different elements of a system from three perspectives. One, how can you make them operate in a trustworthy manner even if you can't trust every element of them? In other words, if an opponent has gained some access to your system, what parts of it can you still trust? The second has to do with making the system resilient in ways that might not involve adaptation, and the third has to do with evaluation, or scientifically valuable security, and that is the one I think we will begin with first.
CHABROW: Some of the solutions I hear from you and other IT security experts go back to the basic design of the infrastructure, the enterprise architecture. Obviously, if you could build security in there, things would be better off, but we are living in a legacy world. How do you balance those two?
FRINCKE: Well, I agree with you. I am wholeheartedly sympathetic to a clean-slate approach, and it would be much nicer if we could begin by designing all systems to be secure from the outset and have everybody operate them properly, according to those principles. But of course, if we could count on that, we wouldn't have people breaking in at all, and so it would be a moot point.
What we see as important is to begin to understand what it is you can trust about a system and then build out from there, and that is where our fourth area starts to come into play. Which parts of a system can you build to be trustworthy, whether it is the software or the hardware? And once you have a firm foundation, what things can you rely upon from that starting point?
The hope, of course, is that if you begin to infuse security, and if this becomes a market advantage, or an advantage to a state that is seeking to protect itself, thinking of a nation-state here, then people begin to use those more secure components. But I think we do have to assume that there will always be some elements of a system that are not trustworthy, both the people and the components. We will keep working on that infusion, trying to make our security as usable and as secure as possible, but not assume that everything will be working perfectly the way we planned.
CHABROW: How basic is the research conducted at your lab, or are you looking for specific solutions?
FRINCKE: Something I like about our laboratory is that we go from the most basic research to some of the most applied research you could name, and implementation and development as well.
On the basic side, we are looking at some of the mathematical principles behind prediction: what are the mathematics it takes to do prediction? We are also looking at what mathematics one might bring to bear on an evaluation question. If you were comparing two very large infrastructures to each other and trying to decide whether one was more secure than the other, how would you do that? One way is to take a look at a mathematical representation and do some comparisons at that level.
On the other side, and not necessarily in my project, we have people who work with the energy grid who are looking at smart chargers and trying to decide how to implement them so that they are more secure. They are looking at some fundamental protocols and also at how to do a better job of building software, so that is more applied research.
Then we have what I like to call the field researchers, the people who take a look at what is really going on out there in the wild world of cybersecurity, observing networks to see what kinds of malicious threats are under way. That is also research; it can be either open or closed, and it tends to be more applied.