Vulnerability management models are something I have been interested in since hearing an episode of Security Weekly a couple of months ago, in which William Olsen from Tenable discussed the concept. I like models, and not just because I'm lazy: I like standards. A model gives you a point of reference against which to gauge your own processes, something you can't really get on your own. The purpose of a model should be to provide that reference and to offer guidance on how to improve your own processes.
So for my first weekly exercise I'm going to kick off something I have been meaning to do since I first heard that podcast: an analysis and comparison of publicly available vulnerability management models. First up is Tripwire's, an overview of which is given by Irfahn Khimji in a white paper and a subsequent article, linked at the end.
Rather than draft something out of whole cloth, Tripwire based this model on the Capability Maturity Model (CMM), developed at the Software Engineering Institute for the DoD. The CMM is most often associated with software development, having been created at a time when software was becoming a larger part of systems engineering within the DoD, but it describes general processes that can be applied outside of software development as well. The choice to adopt an existing, proven model makes sense. As in the CMM, Tripwire's model runs from level 1 to level 5. A general breakdown of the levels:
- Initial – A chaotic process; scanning is handled by outside entities and the bare minimum is done to meet compliance standards
- Managed – Scanning is done in house and processes are beginning to be defined for identifying and resolving vulnerabilities
- Defined – Processes are well defined and management supports them
- Quantitatively Managed – Processes are enhanced with specific goals and requirements
- Optimizing – Continuous process improvement
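The progression above can be sketched as a simple self-assessment scale. This is my own illustration, not something from the white paper; the level names come from the model, but the summary strings and helper function are hypothetical:

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five CMM-style levels as described in Tripwire's model."""
    INITIAL = 1
    MANAGED = 2
    DEFINED = 3
    QUANTITATIVELY_MANAGED = 4
    OPTIMIZING = 5

# Hypothetical one-line summaries, paraphrasing the list above.
FOCUS = {
    MaturityLevel.INITIAL: "outsourced scanning, minimum compliance",
    MaturityLevel.MANAGED: "in-house scanning, processes forming",
    MaturityLevel.DEFINED: "well-defined processes, management support",
    MaturityLevel.QUANTITATIVELY_MANAGED: "goals, scoring, and metrics",
    MaturityLevel.OPTIMIZING: "continuous process improvement",
}

def next_target(level: MaturityLevel) -> str:
    """Return the focus area for the next maturity level, if any."""
    if level == MaturityLevel.OPTIMIZING:
        return "maintain and refine (no higher level)"
    return FOCUS[MaturityLevel(level + 1)]
```

The ordering is the point: each level is a prerequisite for the next, which is why the model treats skipping levels as impractical.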
One positive aspect of this model jumps out immediately: it is independent of any technology or vendor. That neutrality is an attractive feature in a model like this. The focus is on process improvement rather than on implementing particular technologies or controls, with the definition of improvement left largely to the organization. This forces an organization to analyze its own processes rather than rely on a checklist, but it also offers nothing specific to organizations that are lost on how to proceed. That direction should ultimately come from the organization's own analysis, but it could be incorporated into a model in a way that does not dictate a static path.
Further analysis of the steps:
Initial – The key characteristic of organizations at this level is a lack of control over the process. Because they have no defined processes, they outsource scanning to a provider. There are no goals at this stage beyond meeting the minimum requirements imposed by whatever industry or government standard the organization adheres to.
Managed – To move out of the disorganized Initial stage, processes must be defined. An organization cannot establish effective processes that move the program forward without taking control of scanning and incorporating it into internal routines. This stage also focuses on uncredentialed scanning, which gives a view from the outside. It is a step up from Initial only in that the organization is taking control of its processes; beyond that it is still pretty dismal. No goals or requirements are established beyond meeting minimum standards, the process receives little or no support from management, and there is limited follow-through on the data the scans generate.
Defined – At this stage the program begins to coalesce into a useful tool. Processes become well defined: requirements exist, and scanning and remediation are being integrated into maintenance routines. Most importantly, management buy-in has occurred by this stage. Management sees the value in the program, which only happens when the vulnerability management team works well with system administrators to accomplish the required remediations with minimal impact to operations. Tripwire states that most organizations fall somewhere between the Managed and Defined stages, which maps closely to what other vendors say about their models.
Quantitatively Managed – This is where goals are set for the program. Scoring is developed and implemented, with thresholds that make sense and properly account for risk. That requires a full understanding of your environment and the capabilities available to you. This is also the stage where the organization is truly equipped to make risk management decisions: the full scope of available controls is considered when scoring a vulnerability to determine its true risk.
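As a toy illustration of the kind of scoring this stage implies, a score might start from a severity rating and be adjusted for asset criticality and compensating controls. Everything here is an assumption of mine (the weights, the discount per control, the 4.0 threshold), not anything Tripwire prescribes:

```python
def risk_score(cvss_base: float, asset_criticality: float,
               compensating_controls: int) -> float:
    """Hypothetical score: a CVSS base score (0-10) weighted by asset
    criticality (0-1) and discounted for each compensating control."""
    CONTROL_DISCOUNT = 0.8  # assumed: each control trims 20% of remaining risk
    score = cvss_base * asset_criticality * (CONTROL_DISCOUNT ** compensating_controls)
    return round(score, 2)

def needs_remediation(score: float, threshold: float = 4.0) -> bool:
    """Compare against an organization-defined threshold (4.0 is assumed)."""
    return score >= threshold
```

The point is not the particular formula but that the threshold and weights are explicit, organization-defined quantities, which is what separates this stage from the earlier ones.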
Optimizing – This is the last level, but it is misleading to think of it as an end state. This phase is the ongoing maintenance of the program: it is evaluated against the metrics developed earlier in the process, and continuous improvement is the key. The idea is to use those metrics to set realistic goals and timeframes for achieving them. Once a goal is achieved, reevaluate, then assess again against the revised metrics.
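One concrete metric commonly tracked for this kind of evaluation is mean time to remediate. A minimal sketch, with a data shape I am assuming purely for illustration:

```python
from datetime import date
from statistics import mean

def mean_time_to_remediate(findings):
    """Mean days from detection to fix across closed findings.

    `findings` is a list of (detected, remediated) date pairs, with
    remediated set to None for still-open findings; this shape is an
    assumption for illustration, not a real scanner's export format.
    """
    closed = [(fixed - found).days for found, fixed in findings if fixed]
    return mean(closed) if closed else None
```

A trend in a number like this, compared against a stated goal ("close criticals within 30 days"), is the sort of feedback loop the Optimizing stage runs on.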
This model outlines in broad strokes how to take a vulnerability management program from a barely functioning afterthought to a well-developed program that adds value to the organization. It is vague at points, but that is not necessarily a negative: it provides a framework any organization can apply to itself with some work and analysis. Depending on the organization, that lack of specificity could be a drawback. Where more specific guidance is needed, this model would be better suited as a partial solution, layered on top of another model or on vendor or industry guidance. The model is kept general to make its reach as wide as possible, and it builds on a maturity model known to work in software development.
I plan to go over some other models and then compare them; it is in that comparison that this analysis will become most useful.