Vulnerability Management Maturity Models: Analyzed

I looked around for the model that was originally shown on Security Weekly, but I was never able to find it. That’s unfortunate, because it looked useful. They never specifically mention it, but I think their model was based on the Gartner maturity model for endpoint security, which looks like this:

[Image: Gartner endpoint security maturity model]

That was blatantly stolen from Tripwire (click the picture for the article), who lifted it from a SANS presentation, and it pretty clearly shows that it isn’t all improvements. Things level off, or can even begin to degrade, as you gain more information about your environment. More information means that you have to begin to really manage the information rather than just react to scan data, or whatever the data may be. It’s a key point that these models address in different ways. And there is no “best” model. Each organization must choose a model suited to them, and then change that model up, customize it. These are just platforms to start from in building a way ahead for your own organization.

My favorite model that I reviewed was the Core Security model. That graphic alone is a winner: I can give it to my boss and he will immediately understand what it means. I can tell him where we are on the curve, and it will make sense. More importantly, our challenges will make sense and there is a clear way ahead. It addresses the problem of information overload directly and gives clear indications of how to move past it to effectively manage your environment. When I first read about this idea of a model to gauge your vulnerability management program, this is pretty much what I was hoping to find.

That doesn’t mean it is a complete solution, but as I said, nothing is. The key is adapting this model to your organization in such a way that it keeps the key features without losing what makes it effective. That is a project I’m working on now. I won’t (can’t; they would never let me) share the results, but I don’t think they would matter to anyone else anyway. My results will be specific to my own experience and to making the gains that I need to make within the constraints that I know I have. Each organization can, and should, look at themselves through the lens of one of these models and see what they can change.

I’m giving a talk about this next week at NovaHackers. It will be my first ever talk there (or meeting attended) and I will try not to bore everyone to death with this, but I actually enjoy writing and thinking about it. It’s easy to nerd out over charts and graphs, so I will keep it simple. I’m taking the CEH on Wednesday; I’m not worried about passing it, only that if I fail I’ll be a laughingstock and it’s 700 bucks down the drain. No pressure. Next week… I am not sure what I’m going to write about.

Digital Defense Incorporated’s Vulnerability Management Maturity Model

Digital Defense Incorporated released a white paper on their vulnerability management maturity model, which they dub VM3, back in 2015. The white paper is pretty robust, detailing the steps of vulnerability management as a discipline and then diving into the maturity levels and what they mean. However, I’m focusing on the maturity levels rather than the whole breakdown, which is a good write-up on its own.

[Image: the six levels of the VM3 maturity model]

There are six levels in the model, starting with Level 0. They also define a level not on the scale entitled “Vulnerable To A Breach”, which I find to be more than a little incorrect. I do get what they were trying to convey. Level 0 is entitled “Importance Acknowledged”, and it entails exactly what you would think from the name. The graphic is mostly trying to show that prior to acknowledging the importance of vulnerability management as a whole, the risk involved is considerable and unknown. By taking that first step, an organization can begin the process of vulnerability and risk management, but they are certainly still vulnerable to a breach. A poor choice of wording, I think, especially considering that one of the purposes of these models is for security professionals to use them to explain a way ahead to their leadership, and it gives the wrong impression.

Level 1 is entitled “Primitive Operations”, and it is at this stage that the organization adds in scanning on an ad-hoc basis. The key detail from this level is that the organization is unable to meet compliance objectives. Without the processes in place to prioritize findings and integrate remediation or mitigation, the program is weak.

When an organization is in Level 2, “Purpose Driven Compliance”, we begin to see the automation of tasks like scanning and even scheduled assessments. Other aspects like trending and the beginnings of remediation prioritization are also found here, but that prioritization is immature and shows that at this point the security team is not integrated with other groups within the organization. An important point is that at this level the organization is able to use these fledgling processes to actually achieve compliance.

Priority begins to play a larger role in Level 3, “Proactive Execution.” This level sees the scanning become more advanced and more frequent, and the remediation more integrated with the organization’s business practices. The white paper makes a point that at this level we are still not talking about executive-level buy-in, which is extremely important for the future growth of the program.

At Level 4, “Committed Lifecycle Management”, we see it come together. The executives are on board, remediation efforts are fully integrated and operate on a timeline, and prioritization is a fundamental part of the program. This is also where we begin to see automation playing a role, with automated scans and automated patching.

The last level, Level 5, is entitled “Automated Security Ecosystem”. The idea of this level is taking Level 4, which is a fairly complete system, and adding further automation to make it as seamless as possible. Multiple scans are done from varied vantage points in order to gather the maximum possible system information, and that data is incorporated into the system for analysis.

So there are a few problems with this one. I already spoke about the pre-Level 0 issue, which is just a disagreement over wording. Another problem I have is that credentialed scanning doesn’t show up until Level 3, but in Level 2 the assumption is that compliance has been achieved. I don’t see how any organization can realistically be compliant with whatever compliance standard they are subject to without implementing at least some of the technical measures described in Level 3. This lack of technical maturity at the stage where you should be achieving compliance really hurts the model, in my opinion. Aside from those issues, the model really seems to capture the evolution of a program.

Next week, I’ll compare these three models against each other. I’ve signed up to give a talk on this at NovaHackers next month, so there it is: my first talk with the group. I’ve been on the list for a while, but they have meetings on Mondays and I’ve had classes on Mondays for as long as I can remember. I decided to look at only three models, because beyond that they start to get very obscure, and there is so much repetition that I don’t think I need to go over any more.

Core Security’s Vulnerability Management Maturity Model

Core Security’s model is more robust and detailed than the previous one. They drafted it from issues their clients have run into in the past, and modeled it on the evolution of a security program. They’ve built in milestones and more steps to help organizations define where they are on the roadmap, although they are quick to say that this does not imply that all organizations should ideally be at level five of the model.

The model is broken into six total levels, starting from level zero. The levels are further grouped into three pairings that describe the overall status of these levels, and significant indicators are given at the boundary between these groups as to what crossing into the next group means. Let’s deal with the three groups first. They are:

  • Blissful Ignorance
  • Awareness and Early Maturity
  • Business Risk and Context

Blissful Ignorance covers the first two levels, where an organization does not know the full scope of threats to the enterprise, or possibly even the scope of the enterprise itself. The boundary crossing from this group to the next is titled Peak Data Overload, and is meant to describe the problem organizations have when they implement tools that provide information without having the tools to put that data in context and glean insight from it. Once into the Awareness and Early Maturity group, that context, gained through new tools and processes, builds and allows the organization to become more and more effective with its data. The boundary crossing into the next group from here is named Effective Prioritization, which really just describes the implementation of risk management within the enterprise using the given data. The last group is Business Risk and Context, and it is at this point that we’re truly talking about a mature program that is not just addressing risks, but incorporating the true business impact of those risks.


This layout makes it easy to gauge, in broad strokes, what the specific levels are going to mean without delving deeply into any of them. You know immediately, based on the group, where an organization is with data analysis, and can probably make some educated guesses about the status of their tool implementations. But the groups are further divided into levels, which are:

  • Level 0: Non-Existent
  • Level 1: Scanning
  • Level 2: Assessment and Compliance
  • Level 3: Analysis and Prioritization
  • Level 4: Attack Management
  • Level 5: Business-Risk Management

Level 0 is exactly what it sounds like: no program, minimal controls, no mitigation strategies. Level 1 introduces scanning on some level and some amount of mitigation based on those scans, but no consistent plan for either. Level 2 is where the program actually starts to coalesce, with scheduled scanning driven specifically by some sort of compliance framework and a plan for mitigation. Level 3 begins to get into real risk management, beginning the process of prioritization and trending. Level 4 shifts the focus: it assumes the processes for scanning and patching have matured enough to support that shift, and starts looking at actual threats and attackers. The last step, Level 5, integrates with business processes and looks very much like the continuous monitoring cycle that is so often talked about.

This model has a lot of great information in it, and overall I like how it is organized. It is easy to read, and most people can look at it and, without much analysis, guess where they are going to fall in the model, and probably be correct. I like that Core Security is very blunt about the state of most programs, stating that most will fall somewhere between levels 1 and 2, and that they offer several specific measures to help organizations grow within the context of the model. Core Security focuses on operational context in their proposed solutions, extending vulnerability management into other tools and security realms so that it is truly integrated into the business.

On the negative side, the model is not a fit for every organization. It is more specific, which is good for implementation, but that specificity can limit the ability of an organization to grow within the model. Smaller organizations especially will have trouble adopting some of the prescribed measures, which can be quite costly or resource-intensive. Core Security does point out that the objective for every organization will not necessarily be to progress continuously to the end as a “goal”, but that undercuts some of the usefulness of the model as well.

Overall, Core Security’s model is a very strong and very detailed one. The fact that I can look at it and immediately tell where my organization is, and where our higher headquarters is, is pretty impressive for any model. Core Security focuses on the problem that most people have: too much data and not knowing what to do with it. Next week, assuming I’m not late again, I will be looking at the VM3 model from Digital Defense Incorporated.


Related links:

The Threat and Vulnerability Management Maturity Model

https://www.coresecurity.com/system/files/attachments/vulnerability-management-maturity-model-white-paper_0.pdf

Vulnerability Management Maturity Models – Tripwire


Vulnerability management models are something I have been interested in since hearing an episode of Security Weekly a couple of months ago in which William Olsen from Tenable discussed the concept. I like models, not just because I’m lazy, but because I like standards. A model gives you a point of reference you can use to gauge your own processes, something you can’t really develop independently. The purpose of a model should be to provide that reference and to give guidance on how to improve your own processes.

So for my first weekly exercise I’m going to kick off something I have been meaning to do since I first heard that podcast: an analysis and comparison of different publicly available vulnerability management models. First up is Tripwire’s, an overview given by Irfahn Khimji in a white paper and a subsequent article, both linked at the end.

Rather than draft something out of whole cloth, Tripwire has based this model on the Capability Maturity Model (CMM) developed for the DoD. The CMM is most often linked to software development programs, having been developed at a time when software development was becoming a larger part of systems engineering within the DoD, but it describes general processes which can be applied to programs outside of software development. This choice to adopt an existing, proven model makes sense. As in the CMM, Tripwire’s model runs from level 1 to level 5. A general breakdown of the steps is:

  1. Initial – A chaotic process, with scanning managed by outside entities and the bare minimum being done to meet compliance standards
  2. Managed – Scanning is done in house and processes are beginning to be defined for identification and resolution of vulnerabilities
  3. Defined – Processes are well-defined and management supports those processes
  4. Quantitatively Managed – Processes are enhanced by specific goals and requirements
  5. Optimizing – Continuous process improvement

[Image: the CMMI staged representation]

Immediately one positive aspect of this model jumps out: it is independent of any technology or vendor. That kind of neutrality is not a given in models like these, and it is attractive. The model focuses on process improvement rather than on implementation of technologies or controls, with the definition of improvement left largely to the organization. An organization is made to analyze its own processes rather than rely on a checklist, but the model also offers no specific suggestions for organizations that are lost on how to proceed. That direction should come from the organization’s own analysis, but it could be incorporated into a model in a way that does not dictate a static path to the group.

Further analysis of the steps:

Initial – The key characteristic of organizations in this step is a lack of control over the process. Because organizations do not have defined processes, they are outsourcing capabilities to a provider who does scans for them. There are no goals in this stage other than meeting minimum requirements applied to the organization by whatever industry or governmental standard they adhere to.

Managed – In order to move out of the disorganized state of the Initial stage, processes must be defined. An organization cannot establish effective processes that move the program forward without taking control of the scanning and incorporating it into internal processes. This stage also focuses on uncredentialed scanning in order to give a view from the outside. This is a step up from the Initial stage only in that the organization is taking control of the processes; aside from that, it is still pretty dismal. No goals or requirements are established outside of meeting minimum standards, the process receives no or minimal support from management, and there is limited follow-through on the data generated by the scans.

Defined – At this stage the program actually begins to coalesce into a useful tool. Processes become well-defined, meaning there are requirements, and scanning and remediation are becoming integrated into maintenance routines. Most importantly, management buy-in has already occurred at this stage. This means they see the value in the program, which only happens when the vulnerability management team is working well with system administrators to accomplish the required remediations with minimal impact to operations. Tripwire states that most organizations fall somewhere between the Managed and Defined stages, and that maps closely with what other organizations say about their models.

Quantitatively Managed – This is where we set goals for the program. Scoring is developed and implemented, with thresholds set in a way that makes sense and properly takes risk into account. This requires a full understanding of your environment and the capabilities you have available. This is also the stage where the organization is really equipped to make risk management decisions. The full scope of the available controls is considered when making scoring decisions to determine the true risk of a vulnerability.

Optimizing – This is the last step, but it’s not accurate to think of it that way. This phase is really just continuing maintenance of the program, with the program evaluated against the metrics developed earlier in the process. Continuous improvement is the key to this stage: use the metrics to set realistic goals and timeframes for achieving those goals, and once they are achieved, reevaluate and assess again based on the revised metrics.

Conclusion

This model outlines in broad strokes how to take a vulnerability management program from a barely functioning afterthought to a well-developed program adding value to the organization. It’s vague at points, but that is not necessarily a negative. It provides a framework that any organization can apply to themselves with some work and analysis. Depending on the organization, that lack of specificity could be a negative; in cases where an organization needs more specific guidance, this model would be suitable as a partial solution, used either on top of another model or with vendor/industry guidance. The model is standardized in order to make its reach as wide as possible, and it follows a model known to work in software development.

I plan to go over some different models and then compare them; that is when this analysis will become most useful.

References:

http://www.infosecurityeurope.com/__novadocuments/153767?v=635875243235970000
http://www.tripwire.com/state-of-security/vulnerability-management/the-five-stages-of-vulnerability-management/