6 Ways to Reform FISMA Without New Law

IGs, Agencies Need to Agree on Compliance Terms

Former OMB leaders Karen Evans and Franklin Reeder are working to improve the evaluation of information security within U.S. federal government agencies and correct flaws in the current approach.

In a paper co-authored for SafeGov, Measuring What Matters: Reducing Risk by Rethinking How We Evaluate Cybersecurity, Reeder, a onetime top executive in the White House Office of Management and Budget, and Evans, the government's former top IT officer, offer steps to get inspectors general and agencies on the same page with regard to audits and information risk management.

"If you run a FISMA evaluation today and if an IG runs it today, there really is no prioritization of the findings," says Evans in an interview with Information Security Media Group [transcript below].

Adds Reeder: "The problem we're seeking to address is we hear increasingly that the combination of policy guidance and audit reports is causing the limited resources we have to protect our cyber systems to be spent on things that don't reduce risk."

The Federal Information Security Management Act, the 11-year-old law that governs federal government IT security, requires auditors to verify a checklist approach to IT security, an approach IT and IT security managers often complain doesn't recognize the steps they take to secure their agencies' digital assets.

"[IGs] can go to the [National Institute of Standards and Technology] publications and they can run an evaluation, as long as they generate the numbers that OMB talks about," Evans says. "That means an agency could end up with 500 findings."

With those findings, a chief information officer within the agency could review them and indicate 495 that can be addressed given their current budget.

"[They] go through, fix them all, and it looks like you only have five left," Evans says. "Except those five could be the five that are actually 90 percent of your problem. They're the hardest ones to do. ... We're trying to change that dynamic of what's being measured."

In an interview about the paper, Evans and Reeder discuss:

  • How their approach differs from FISMA;
  • Why audits deal with compliance, and not cybersecurity;
  • How they're working on getting their recommendations adopted.

Evans heads the U.S. Cyber Challenge, a program to encourage high school students through college graduates to explore careers in information security. She served as administrator for IT and e-government, a position that's now known as federal chief information officer, in the George W. Bush White House. Earlier in her career, Evans was the CIO at the Department of Energy.

Reeder, a founder and director of the Center for Internet Security, worked at OMB for more than 20 years, where he was chief of information policy, deputy associate director for veterans affairs and personnel and assistant director for general management. He also served as director of the White House Office of Administration.

Chief Operating Officer Julie Anderson and Senior Associate Meghan Wareham of Civitas Group co-authored with Evans and Reeder the SafeGov report.

Rethinking Cybersecurity Evaluations

ERIC CHABROW: We'll get to the details in a moment, but what's the elevator pitch on the paper?

FRANKLIN REEDER: [I'm] glad you asked because we've been practicing it, trying to figure out how we explain what is a very dense subject to smart folks who may not be immersed in it. The problem we're seeking to address is we hear increasingly that the combination of policy guidance and audit reports is causing the limited resources we have to protect our cyber systems to be spent on things that don't reduce risk. Rather than simply complain about that, we decided after some work that we had done earlier in the fall to attempt to develop a framework that would actually give both auditors and practitioners a basis for having a constructive conversation.

The work really is in three pieces. First, we reintroduce, or emphasize, the concept of a secure baseline, that is, things that every organization ought to do before even beginning to develop a risk framework. Think of it in terms of managing your own health. You should eat properly, drive safely, engage in safe sex and have some basic immunizations even if you're healthy and not particularly at risk. The second level of the work then encourages agencies to develop what we call an organizational cyber-risk management framework, which then says, "What are the things that the threats tell us and the characteristics of what we do tell us about how we may be particularly at risk in the sense that, for example, if you're running a nuclear power plant, you have a different set of concerns than if you're running a payroll system?"

The risk management framework breaks down into nine areas of potential risk, and what we've encouraged organizations to do then is to develop a framework that allows them to develop a much more detailed plan that can actually be measured. Think of it from the public-health perspective. Higher risk populations you might want to do more immunizations; you might want to have more frequent examinations; you might want to run far more detailed and even extensive tests than you do for the general population.

The last level of the proposal that we put forward is the development of a risk-management indicator, which is really a roll-up of the status of each of the areas identified in the framework. If you think of the health metaphor, we all manage weight and blood pressure, but we take an overall look at our health and, if we're particularly at risk, we may be watching much more detailed indicators and monitoring them on a much more frequent basis than if we're not particularly at risk.
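The roll-up idea can be sketched in a few lines of code. This is only an illustration, not the paper's actual indicator: the area names, weights and status scores below are hypothetical, and the paper's nine risk areas are not enumerated here.

```python
# Hypothetical sketch of a risk-management indicator: roll up the status of
# each risk area (0.0 = unaddressed, 1.0 = fully addressed) into one score,
# weighting each area by how much risk it carries for this organization.
# Area names, weights and statuses are illustrative, not from the paper.
AREAS = {
    # area name: (status, risk weight)
    "boundary_defense":  (0.9, 3),
    "access_control":    (0.7, 5),
    "incident_response": (0.4, 4),
    "asset_inventory":   (0.8, 2),
}

def risk_indicator(areas):
    """Weighted average of per-area status; 1.0 means every area is covered."""
    total_weight = sum(weight for _, weight in areas.values())
    return sum(status * weight for status, weight in areas.values()) / total_weight

print(round(risk_indicator(AREAS), 3))  # 0.671
```

A higher-weight area pulls the indicator down harder when it lags, which mirrors the public-health metaphor: the population most at risk gets the more frequent, more detailed checks.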

What we're trying to do is put together work that's already very much under way. There are real-life examples - like John Streufert's work at the Department of State and the work that the National Institute of Standards and Technology is doing around developing templates and revising its 800-53 document - that we hope will then result in reducing risk and using the scarce resources we already have to make federal systems much more trustworthy and much less vulnerable.

CHABROW: This could be all done without new legislation, correct? That's the intent of this paper?

REEDER: Our premise was exactly that. It's a follow-up to the work that Karen Evans, Jim Lewis, Alan Paller and I did in the fall, which assumed that while Congress continues to deliberate in its fashion, there are things that the executive already has ample authority to do.

Differing from FISMA

CHABROW: How will this approach differ from the guidance that's provided through FISMA, the Federal Information Security Management Act?

KAREN EVANS: What happens on an annual basis is that the Office of Management and Budget sends out a memo, tells the CIOs these are the things that we're looking at, this is what we want you to report upon, and then part of that guidance also goes specifically to the inspectors general and covers the evaluations they're required to perform and what they're supposed to do under the requirements of FISMA.

[With] this particular report, what we're asking for and what we're recommending to OMB is that they do away with the top-level types of numbers that they're asking for from the inspectors general and really focus on some basic types of evaluations that need to be done, and change that dynamic. For example, as we conducted our research, what happens [is they] ask for just a static set of numbers. That's exactly what they ask for. How many systems have you looked at? How many systems are high, moderate and low? How many are run by contractors? How many are managed by government? How many have their privacy impact assessments posted, records, all those things like that?

It's all about the numbers, and there are a lot of resources spent trying to generate the numbers, but are those the right measurements to really reduce the security risk and to deal with the current threat that's facing the agencies? In this, and the first part of what Frank talked about, we're really emphasizing this idea of the secure baseline. Everybody has to start from the same place, and the IGs need to baseline where the agency is against this target. If the agency isn't there, then that becomes a challenge unto itself.

The other thing is that IGs measure one way, and words mean one thing to the inspectors general. When those recommendations come back to the CIO, they interpret them differently. The other part of this is trying to get them to use a standardized approach so that the data means the same thing to both sides, and that you're actually having a discussion about the findings and you can prioritize those findings to fix them in a way that will reduce the risk for the agency. What you want is the biggest bang for the buck.

The easiest way to explain this is if you run a FISMA evaluation today and if an IG runs it today, there really is no prioritization of the findings. They can go to the NIST publications and they can run an evaluation, as long as they generate the numbers that OMB talks about. That means an agency could end up with 500 findings. Then, a CIO could look at that and say, "I can fix 495 of those findings with the budget I have." [They] go through, fix them all, and it looks like you only have five left, except those five could be the five that are actually 90 percent of your problem. They're the hardest ones to do. If you just look at the numbers, it looks really good. If you look at the risk and the data, it's really bad. We're trying to change that dynamic of what's being measured.
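Evans' point survives a toy calculation. The numbers below are hypothetical (the interview gives only the 500/495/90-percent figures): closing the 495 easy findings looks near-perfect by count while leaving nearly all of the risk in place.

```python
# Toy numbers for Evans' example: 495 "easy" findings share 10% of total risk,
# while 5 "hard" findings carry the other 90%. All figures are illustrative.
easy = [10 / 495] * 495  # per-finding risk share of the easy findings
hard = [90 / 5] * 5      # per-finding risk share of the hard findings

# The CIO fixes every easy finding and none of the hard ones.
closed_pct = len(easy) / (len(easy) + len(hard)) * 100
risk_left_pct = sum(hard)

print(f"{closed_pct:.0f}% of findings closed")  # 99% of findings closed
print(f"{risk_left_pct:.0f}% of risk remains")  # 90% of risk remains
```

Counting findings says the agency is 99 percent done; weighting findings by risk says it has barely started. That gap is exactly the dynamic the paper wants the measurements to capture.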

Compliance Issues

CHABROW: One complaint I've heard from time to time from CISOs and CIOs about audits, whether IG or GAO, is that they're dealing more with the compliance issue and they're not necessarily looking at the cybersecurity. You can or cannot be compliant, but at the same time there are other things that these CISOs and CIOs say that they're doing to protect their agency's IT. Would this help deal with that problem?

REEDER: We're hearing exactly the same thing and that's precisely what this framework is intended to address. When you talk to the IGs, some will say essentially they have no discretion. As Karen said, there are 500 items that they have to measure compliance with and they don't feel empowered to exercise judgment about which is important and which isn't. The idea here is to get that question out there first so that indeed, and using Karen's example, the five that actually matter get fixed and we move away from what we would call "cybersecurity by checklist."

Changes in Reporting

CHABROW: Agencies would still have to report regularly under the FISMA law. That doesn't change, but would any reporting be changed and does the White House OMB have the authority to make those changes?

EVANS: Absolutely. The other thing is you're required by statute to report to Congress on an annual basis on the status of what's going on, and this is the part that Frank was talking about. What DHS is doing is the continuous monitoring and the diagnostic capability. That's really trying to get real-time feedback based on your environment today: here are the threats, here's where you are and here are the mitigating actions that you should be taking in accordance with that. What you want to do then also is get IGs up to par so that in essence there's continuous evaluation. If I see what the threat is, I take a mitigating strategy and I implement it, then you need the evaluation to be constant as well so that you can ensure that the actions you're taking and the processes you have in place are adequate. That demands constant evaluation.

One of the things the paper does go into is the unintended consequences of the policy, and this is a bone of contention I always bring up. The OMB policy says that at a minimum of every three years you have to do certain things, or the minute that a threat or a significant change has happened in your environment. As written, the policy is still adequate for today. It's the implementation where people have said, "I only have to do it every three years." They forget the "or" part: or if the threat has changed or the risk has increased.

Now, if you look at what the constant environment is, the operating environment, the threat is ongoing. That's how they came up with advanced persistent threat. If the threat is ongoing on a daily basis, it means you have to do an evaluation and mitigation on a daily basis, if not hourly, if not automated. That's what everybody meant by continuous monitoring. But you have to have a method to evaluate whether the way you're doing that is adequate, is mature, is addressing the risk and is happening on that basis. What you want to do is change that dialogue and get the IGs into this constant evaluation mode the same way that the agencies are talking about continuous monitoring mode.

Getting Recommendations Adopted

CHABROW: What are you doing to try to get the government to adopt the recommendations you have?

REEDER: We didn't develop this report in the dark. While we don't assert that everybody that we talked to agrees with us, it was based on consultations with some of the people who are responsible for issuing the guidance as well as other smart folks. We're hopeful that the guidance that OMB issues this spring will at least reflect consideration of the recommendations we've made. The good news is that Karen and I are private citizens who have First Amendment rights. The bad news is all we have is a very small bully pulpit, so we hope to be constructive voices supporting efforts by the people who have the authority to make this happen. We'll continue to try to find examples, and I think there are many folks who are doing it and getting it right, of the advantages a more risk-based framework actually provides.

EVANS: In this particular case, what we also did was go to the National Academy of Public Administration, which is congressionally chartered and has senior fellows with great, deep experience in multiple areas dealing with public administration. [With] the initial report, there was a panel that was convened, and Earl Devaney chaired the panel for us. Earl Devaney was the inspector general at the Department of the Interior and then also ran the recovery board, which is all about a lot of audits and evaluations. We took the report to them to get their opinion from that public-administration viewpoint about what needs to be done, the validity of the arguments, and whether we analyzed the problem appropriately, especially since we're focusing on how to change what's happening between the chief information security officers and the inspectors general so that they can complement each other and get to the same outcome, versus seeming to be arguing over things all the time. That's not productive in helping the agency achieve what it needs.

Their review, their findings and their letter about the report are included up front as well, and there's a lot that the academy can do to help educate and reach out to the inspectors general about the concepts in here so that when - and I'm going to be optimistic - the Office of Management and Budget embraces some of the concepts that are in here, the IGs will be ready to take that hand-off and go forward and implement it.

Short Timeframe

CHABROW: You're talking about a short timeframe, aren't you? Part of your recommendations is a FISMA evaluation plan to OMB from the inspectors general no later than May?

EVANS: Everybody looks at that and thinks, "Wow. That's really short." It's actually in order to affect the 2014 guidance. One of the big things that the NAPA panel of senior fellows talked about was workforce issues with the CIOs, as well as with the inspectors general, so their recommendation was about making sure you get this right, because implementation is going to be critical. You want to get implementation right. You want to be methodical about the implementation and you want to make sure that IGs have the capabilities to do it. If they don't, what are the gaps with the IGs, so that OMB can really talk to them about the best way to move forward? Or, do you phase it with a group of agencies and a group of IGs that have the capability? That's why you want it in May: the 2014 guidance is under way right now, and a lot of IGs are already starting their evaluations in order to gather the data they think OMB is going to ask for. If OMB changes what it asks for and you've already done your evaluation, now you've used two sets of resources. IGs have limited resources as well, so you want to get ahead so that you can affect their resource planning as well.

CHABROW: If there is to be some kind of FISMA reform legislation enacted this year, would these things be incorporated into that?

REEDER: Clearly, yes. We think there's ample authority in OMB's discretion in issuing FISMA guidance to adopt most of what we're suggesting. Clearly reinforcing it in FISMA reform legislation would send a very strong signal.

Change Is Needed

CHABROW: Any other thoughts?

REEDER: We're proposing a fairly ambitious cycle. We deliberately pushed this out when we did in hopes of influencing the guidance for 2014. We don't expect the world to turn on a dime, but it's got to start turning soon or we'll continue to do things that don't make us more secure, and that's ultimately what this is about.

EVANS: There's nothing in this document - and I hope that this is how it would be taken - that would derail anything, for example, that the Department of Homeland Security is doing with their continuous monitoring. This is really trying to leverage it. The paper tries to stress that you have to have a secure baseline in place to go forward before you do anything that's in here. That's the key to success. You have to really get that part of it evaluated and agreed upon so that you can then measure agencies' progress against that baseline.



