Digital Defense Incorporated’s Vulnerability Management Maturity Model

Digital Defense Incorporated released a white paper on their vulnerability management maturity model, which they dub VM3, back in 2015. The white paper is pretty robust, detailing vulnerability management as a discipline before diving into the maturity levels and what they mean. However, I’m focusing on the maturity levels rather than their whole breakdown, which is a good write-up on its own.


There are six levels in the model, starting with Level 0. They also define a level not on the scale entitled “Vulnerable To A Breach”, which I find to be more than a little misleading. I do get what they were trying to convey: Level 0 is entitled “Importance Acknowledged”, and it entails exactly what you would think from the name. The graphic is mostly trying to show that prior to acknowledging the importance of vulnerability management as a whole, the risk involved is considerable and unknown. By taking that first step, an organization can begin the process of vulnerability and risk management, but they are certainly still vulnerable to a breach. A poor choice of wording, I think, especially considering that one of the purposes of these models is for security professionals to use them to explain a way ahead to their leadership, and this label gives the wrong impression.

Level 1 is entitled “Primitive Operations”, and it is at this stage that the organization adds scanning on an ad-hoc basis. The key detail of this level is that the organization is still unable to meet compliance objectives. Without processes in place to prioritize findings and integrate remediation or mitigation, the program is weak.

When an organization is in Level 2, “Purpose Driven Compliance”, we begin to see the automation of tasks like scanning, including scheduled assessments. Other aspects like trending and the beginnings of remediation prioritization also appear here, but that prioritization is immature, which reflects the fact that at this point the security team is not integrated with other groups within the organization. An important point is that at this level the organization is able to use these fledgling processes to actually achieve compliance.
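
Just to make that concrete, here’s a minimal sketch (with made-up data and field names, not anything from the white paper) of what Level 2 prioritization tends to look like: findings ranked on raw severity alone, with no asset or business context.

```python
# Hypothetical scan findings; a real feed would come from your scanner's export.
findings = [
    {"host": "10.0.0.5", "cve": "CVE-2015-0204", "cvss": 4.3},
    {"host": "10.0.0.8", "cve": "CVE-2014-6271", "cvss": 10.0},
    {"host": "10.0.0.5", "cve": "CVE-2015-1635", "cvss": 10.0},
]

# Level 2 "prioritization": sort purely by CVSS score, highest first.
# No asset value, no threat intelligence, no input from other teams.
for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f"{f['cvss']:>4}  {f['cve']}  {f['host']}")
```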

Priority begins to play a larger role in Level 3, “Proactive Execution.” At this level scanning becomes more advanced and more frequent, and remediation becomes more integrated with the organization’s business practices. The white paper makes a point that at this level we are still not talking about executive-level buy-in, which is extremely important for the future growth of the program.

At Level 4, “Committed Lifecycle Management”, we see it come together. The executives are on board, remediation efforts are fully integrated and operate on a timeline, and prioritization is a fundamental part of the program. This is also where automation begins to play a larger role, with automated scans and automated patching.

The last level, Level 5, is entitled “Automated Security Ecosystem”. The idea of this level is taking Level 4, which is already a fairly complete system, and adding further automation to make it as seamless as possible. Multiple scans are done from varied vantage points in order to gather the maximum possible system information, and that data is incorporated into the system for analysis.
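
As a rough sketch of the multi-vantage-point idea, assuming a simple host/CVE data model of my own invention: merge the findings from each scan position into one record per host and vulnerability, so the analysis can distinguish issues visible from the outside from those only visible internally.

```python
from collections import defaultdict

# Hypothetical findings from two scan vantage points.
scans = {
    "external": [("203.0.113.10", "CVE-2014-3566")],
    "internal": [("203.0.113.10", "CVE-2014-3566"), ("10.0.0.5", "CVE-2015-1635")],
}

# Collapse to one record per (host, CVE), tracking where each was seen.
merged = defaultdict(set)
for vantage, findings in scans.items():
    for host, cve in findings:
        merged[(host, cve)].add(vantage)

for (host, cve), vantages in sorted(merged.items()):
    print(f"{host}  {cve}  seen from: {', '.join(sorted(vantages))}")
```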

So there are a few problems with this one. I already spoke about the pre-Level 0 issue, which is just a disagreement over wording. Another problem I have is that credentialed scanning doesn’t show up until Level 3, but in Level 2 the assumption is that compliance has been achieved. I don’t see how any organization can realistically be compliant with whatever framework they are accountable to without implementing at least some of the technical measures described in Level 3. This lack of technical maturity at the stage where you should be achieving compliance really hurts the model, in my opinion. Aside from those issues, the model really does capture the evolution of a program.

Next week, I’ll compare these three models against each other. I’ve signed up to give a talk on this at NovaHackers next month, so there it is. It will be my first talk with the group; although I’ve been on the list for a while, they have meetings on Mondays and I’ve had classes on Mondays since forever. I decided to only look at three models because beyond that they start to get very obscure, and there is so much repetition that I don’t think I need to go over any more.

Core Security’s Vulnerability Management Maturity Model

Core Security’s model is more robust and detailed than the previous one. They drafted it in response to client issues they have seen in the past and modeled it on the evolution of a security program. They’ve built in milestones and more steps to help organizations define where they are on the roadmap, although they are quick to say that this does not imply that all organizations should ideally be at level five of the model.

The model is broken into six total levels, starting from level zero. The levels are further grouped into three pairings that describe the overall status of these levels, and significant indicators are given at the boundary between these groups as to what crossing into the next group means. Let’s deal with the three groups first. They are:

  • Blissful Ignorance
  • Awareness and Early Maturity
  • Business Risk and Context

Blissful Ignorance covers the first two levels, where an organization does not grasp the full scope of threats to the enterprise, or possibly even the scope of the enterprise itself. The boundary between this group and the next is titled Peak Data Overload, and it describes the problem organizations have when they implement tools that generate information without having the tools to put that data in context and glean insight from it. Once into the Awareness and Early Maturity group, that context, gained through new tools and processes, builds and allows the organization to become more and more effective with their data. The boundary into the last group is named Effective Prioritization, which describes the implementation of risk management within the enterprise using the given data. The last group is Business Risk and Context, and it is at this point that we’re truly talking about a mature program, one that is not just addressing risks but incorporating the true business impact of those risks.


This layout makes it easy to grasp, in broad strokes, what the specific levels mean without delving deeply into any of them. You know immediately, based on the group, where an organization is with data analysis, and you can probably make some educated guesses about the status of their tool implementations. But the groups are further divided into levels, which are:

  • Level 0: Non-Existent
  • Level 1: Scanning
  • Level 2: Assessment and Compliance
  • Level 3: Analysis and Prioritization
  • Level 4: Attack Management
  • Level 5: Business-Risk Management

Level 0 is exactly what it sounds like: no program, minimal controls, no mitigation strategies. Level 1 introduces scanning on some level and some amount of mitigation based on those scans, but no consistent plan for either. Level 2 is where the program actually starts to coalesce, with scheduled scanning driven specifically by some sort of compliance framework and a plan for mitigation. Level 3 begins to get into real risk management, starting the processes of prioritization and trending. Level 4 shifts the focus, assuming that the processes for scanning and patching have matured enough to support that shift, and starts looking at actual threats and attackers. The last step, Level 5, integrates with business processes and looks very much like the continuous monitoring cycle that is so often talked about.
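
The trending piece of Level 3 can start as simply as watching the count of open findings move between scans. A toy sketch with invented numbers:

```python
# Hypothetical scan history: (scan date, open findings at that scan).
scan_history = [
    ("2016-03-01", 412),
    ("2016-04-01", 388),
    ("2016-05-01", 351),
]

previous = None
for scan_date, open_count in scan_history:
    delta = "" if previous is None else f" ({open_count - previous:+d})"
    print(f"{scan_date}: {open_count} open findings{delta}")
    previous = open_count
```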

This model has a lot of great information in it, and overall I like how it is organized. It is easy to read, and most people can look at it and, without much analysis, guess where they are going to fall in the model, and probably be correct. I like that Core Security is very blunt about the state of most programs, stating that they will mostly fall somewhere between Levels 1 and 2, and that they offer several specific measures to help organizations grow within the context of the model. Core Security focuses on operational context in their proposed solutions, extending vulnerability management into other tools and security realms so that it is truly integrated into the business.

On the negative side, the model is not a fit for every organization. It is more specific, which is good for implementation, but that specificity can limit an organization’s ability to grow within the model. Smaller organizations especially will have trouble adopting some of the prescribed measures, which can be quite costly or resource-intensive. Core Security does point out that the objective will not necessarily be for every organization to progress continuously to level five as a “goal”, but that undercuts some of the usefulness of the model as well.

Overall, Core Security’s model is a very strong and very detailed one. The fact that I can immediately look at it and tell where my organization is, and where our higher headquarters is, is pretty remarkable for any model. Core Security focuses on the problem that most people have, which is too much data and not knowing what to do with it. Next week, assuming I’m not late again, I will be looking at the VM3 model from Digital Defense Incorporated.


Related links:

The Threat and Vulnerability Management Maturity Model

Verizon DBIR 2016

I’m a few days late in posting this; I like to get it done on Monday. Maybe I’ll move that to Fridays from here on out, but I still intend to do it weekly. It has been helpful so far, and I feel like I can talk confidently about the things I have posted on.

So Verizon released their Data Breach Investigations Report for 2016, to much controversy. Of concern to me, of course, is the vulnerability section, which has been the source of the most controversy. It was a product of Kenna Security, spearheaded by Michael Roytman, a well-known data scientist. So this thing has some credentials behind it, and it is pitched specifically as being actionable for vulnerability management personnel.

Really this whole thing frames another semi-popular topic from the week, which was impostor syndrome. Ben Hughes wrote a blog post about this, and it is something I have struggled with for a while. In a field with so many talented people, it is easy to forget that the most visible people are also the most extremely talented in the pool, and that even they have weaknesses. It is important to look at yourself with perspective, but also to look at others with perspective. If you see an amazing presentation at a conference, you’re seeing a point on a timeline that started years and years ago. It should be inspiring rather than making you feel like you’re not a part of the group. Which, you know, is easier said than done.

But the DBIR situation really demonstrates to me how important this is to overcome. Verizon is Verizon, Roytman is very intelligent, and Kenna has done some great work. In the face of that, it is tempting to look at their data points on vulnerabilities and assume that any issues you have with them are due to your own lack of experience or lack of data. It is important to recognize the flaws when you see them. The sample is limited, even if it is a large sample. The results were not pruned, leaving many DoS results in the top vulnerabilities that just do not lend value. The methodology was disclosed, yet there are vulnerabilities on the list that could not possibly meet the stated requirements. The results themselves didn’t make sense, with specific vulnerabilities being called out that have probably never been exploited at all, let alone enough to demand a spot on a list such as this.
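
Pruning is not hard, which is part of what makes its absence frustrating. Here’s a sketch of the kind of filter I mean, with invented impact tags standing in for metadata you would actually pull from NVD or your scanner:

```python
# Hypothetical "top vulnerabilities" data; impact tags are illustrative only.
results = [
    {"cve": "CVE-2015-0204", "impact": ["info-disclosure"], "hits": 950},
    {"cve": "CVE-2002-0012", "impact": ["dos"], "hits": 1200},
    {"cve": "CVE-2014-6271", "impact": ["code-execution"], "hits": 800},
]

# Drop findings whose only impact is denial of service before ranking.
pruned = [r for r in results if r["impact"] != ["dos"]]
for r in sorted(pruned, key=lambda r: r["hits"], reverse=True):
    print(r["cve"], r["hits"])
```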

The value in a report like this is in its applicability, and the problem with this report, at least the vulnerability section, is that it has no application. I cannot in good conscience take these results and advise my engineers to prioritize remediation of the FREAK SSL vulnerabilities over newly released Microsoft patches, as Roytman suggests. Big-picture stuff is great, but at the end of the day I have a network to help protect, and following this advice would undermine those efforts. It is important to be critical, not disrespectful of the individual or the work, but still skeptical. And this is a point where you have to overcome impostor syndrome and understand that you don’t have to be a renaissance hacker to realize that these results are sorely lacking in operational perspective. I can’t apply them, and I can’t really glean much from them aside from making some assumptions about the data set they came from. This is an important lesson to keep in mind: no matter the source, be skeptical.

Rob Graham posted a synopsis just yesterday of how they arrived at these results. He has an IDS background and breaks it down pretty simply. The bottom line is that, as I said above and he said in his own post, the data is not actionable. This is the most research I’ve ever done into an industry report, and to see it so full of holes is distressing.

Last week of school, and the last final is today (*it was Monday, I am late), then I can refocus on more pertinent stuff. Next week I will write about Core Security’s vulnerability management maturity model. I had planned to do that this week, but this whole DBIR thing seemed to butt right up against the other discussion about impostor syndrome, and that is an important thing to think about. I hope to keep this blog up and look back years from now at how much I have changed in this respect.


Vulnerability Management Maturity Models – Trip Wire


Vulnerability management models are something I have been interested in since hearing an episode of Security Weekly a couple of months ago in which William Olsen from Tenable discussed the concept. I like models, not just because I’m lazy, but because I like standards. A model gives you a point of reference against which to gauge your own processes, which is something you can’t really get independently. The purpose of a model should be to provide that reference and to give guidance on how to improve your own processes.

So for my first weekly exercise I’m going to kick off something I have been meaning to do since I first heard that podcast: an analysis and comparison of different publicly available vulnerability management models. First up is Trip Wire’s, with an overview given by Irfahn Khimji in a white paper and a subsequent article, linked at the end.

Rather than draft something out of whole cloth, Trip Wire has based this model on the Capability Maturity Model (CMM), developed at Carnegie Mellon’s Software Engineering Institute for the DoD. The CMM is most often linked to software development programs, having been developed at a time when software development was becoming a larger part of systems engineering within the DoD, but it describes general processes that can be applied to programs outside of software development. This choice to adopt an existing, proven model makes sense. As in the CMM, Trip Wire’s model runs from Level 1 to Level 5. A general breakdown of the steps is:

  1. Initial – A chaotic process, with scanning managed by outside entities and the bare minimum being done to meet compliance standards
  2. Managed – Scanning is done in-house and processes are beginning to be defined for the identification and resolution of vulnerabilities
  3. Defined – Processes are well-defined and management supports those processes
  4. Quantitatively Managed – Processes are enhanced by specific goals and requirements
  5. Optimizing – Continuous process improvement


Immediately one positive aspect of this model jumps out: it is independent of any technology or vendor. Tying a model like this to a particular technology or vendor is neither necessary nor attractive. The model focuses on process improvement rather than on implementation of technologies or controls, with the definition of improvement left largely to the organization. An organization is made to analyze its own processes rather than rely on a checklist, but the model also does not offer specific suggestions for organizations that are unsure how to proceed. That direction should come from the organization’s own analysis, but it could be incorporated into a model in a way that does not dictate a static path to the group.

Further analysis of the steps:

Initial – The key characteristic of organizations at this step is a lack of control over the process. Because these organizations do not have defined processes, they outsource scanning to a provider who performs it for them. There are no goals in this stage other than meeting the minimum requirements applied to the organization by whatever industry or governmental standard they adhere to.

Managed – In order to move out of the disorganized state of the Initial stage, processes must be defined. An organization cannot establish effective processes that move the program forward without taking control of scanning and incorporating it into internal processes. This stage also focuses on uncredentialed scanning in order to give a view from the outside. This is a step up from the Initial stage only in that the organization is taking control of its processes; aside from that, it is still pretty dismal. No goals or requirements are established beyond meeting minimum standards, the process is receiving little or no support from management, and there is limited follow-through on the data generated by the scans.

Defined – At this stage the program actually begins to coalesce into a useful tool. Processes become well-defined, meaning there are requirements, and scanning and remediation are becoming integrated into maintenance routines. Most importantly, management buy-in has already occurred at this stage. This means management sees the value in the program, which only happens when the vulnerability management team is working well with system administrators to accomplish the required remediations with minimal impact to operations. Trip Wire states that most organizations fall somewhere between the Managed and Defined stages, which maps closely with what other organizations say about their models.

Quantitatively Managed – This is where we set goals for the program. Scoring is developed and implemented, with thresholds set in a way that makes sense and properly takes risk into account. This requires a full understanding of your environment and the capabilities you have available. This is also the stage where the organization is really equipped to make risk management decisions: the full scope of available controls is considered when scoring a vulnerability in order to determine its true risk.
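
As an illustration of what such scoring could look like (my own sketch, not anything Trip Wire prescribes): weight the base severity by asset criticality, discount it for compensating controls, and compare the result to a remediation threshold. Every weight and cutoff below is made up.

```python
REMEDIATION_THRESHOLD = 6.0  # invented cutoff for "must remediate"

def risk_score(cvss, asset_criticality, control_factor):
    """cvss: 0-10 base severity; asset_criticality: e.g. 0.5-1.5;
    control_factor: 0.0-1.0, where 1.0 means no mitigating controls."""
    return cvss * asset_criticality * control_factor

# A CVSS 9.8 finding on an important asset, halved by a compensating
# control (say, the vulnerable service is firewalled off from most users).
score = risk_score(cvss=9.8, asset_criticality=1.2, control_factor=0.5)
action = "remediate" if score >= REMEDIATION_THRESHOLD else "accept and track"
print(f"score {score:.1f} -> {action}")
```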

Optimizing – This is the last step, but it’s not accurate to think of it as an end state. This phase is really just the continuing maintenance of the program. The program must be evaluated against the metrics developed earlier in the process, and continuous improvement is the key to this stage. The idea is to use the metrics to set realistic goals and timeframes for achieving them. Once a goal is achieved, reevaluate and assess again based on the revised metrics.
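
One of the simplest metrics to run this loop on is mean time to remediate. A small sketch with invented dates and an invented 30-day goal:

```python
from datetime import date

# Hypothetical closed findings: (date detected, date remediated).
closed_findings = [
    (date(2016, 4, 1), date(2016, 4, 20)),
    (date(2016, 4, 3), date(2016, 5, 10)),
    (date(2016, 4, 15), date(2016, 5, 1)),
]

GOAL_DAYS = 30  # the goal set during the previous cycle

mttr = sum((fixed - found).days for found, fixed in closed_findings) / len(closed_findings)
verdict = "tighten the goal" if mttr <= GOAL_DAYS else "investigate bottlenecks"
print(f"MTTR: {mttr:.1f} days (goal: {GOAL_DAYS}) -> {verdict}")
```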


This model outlines in broad strokes how to take a vulnerability management program from a barely functioning afterthought to a well-developed program that adds value to the organization. It’s vague at points, but that is not necessarily a negative. It provides a framework that any organization can apply to itself with some work and analysis. Depending on the organization, that lack of specificity could be a negative; in cases where an organization needs more specific guidance, this model would be suitable as a partial solution, used either on top of another model or alongside vendor/industry guidance. The model is standardized in order to make its reach as wide as possible, and it follows a model known to work in software development.

I plan to go over some different models and then compare them; once I do, this analysis will become more useful.