Setting up a Pentesting Lab

As I mentioned previously, I am going to start working through Georgia Weidman’s book, Penetration Testing, as a primer on penetration testing. The first step in the process is to build a lab. Once my school account opens up and I can access all of that sweet free VMware software, I will build out an ESXi server with FreeNAS storage and migrate all of this to it, but for now I am using VMware Workstation and running these VMs on the Toshiba laptop mentioned in my last post. It works, even if I am anxious to build out the real home system I want.

All of these instructions assume VMware Workstation 12 and an x64 Kali environment. This took me about two weeks to do and then go back and redo for documentation, working an hour or two per day. A motivated person could do it in a day, I am sure. I spent a lot of time experimenting and trying to get different things to work, such as a Windows 7 x64 build running SQL Express.

Kali can be downloaded as a pre-built VM from https://www.offensive-security.com/kali-linux-vmware-virtualbox-image-download/ and imported into VMware Workstation. This is a very simple process. Before powering the VM on, go into the CPU settings and enable Intel VT-x/EPT or AMD-V/RVI virtualization, which is necessary in order to run Android emulators:

[Image: VM CPU settings showing the Intel VT-x/EPT or AMD-V/RVI virtualization option]
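If you would rather make this change outside the GUI, the same option can, as far as I can tell, be set directly in the VM’s .vmx file with this line:

vhv.enable = "TRUE"    # expose hardware virtualization extensions to the guest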

Once logged in, change the password for the root account and create a user:

passwd                      # change the root password
useradd -m xxxxx            # create a user with a home directory
usermod -a -G sudo xxxxx    # add the user to the sudo group
passwd xxxxx                # set the new user's password

Next, perform a system update using:

apt-get update     # refresh the package lists
apt-get upgrade    # install available upgrades

Installing Nessus is a very easy process. Navigate to https://www.tenable.com/products/nessus-home and register for an activation code. The code will be emailed to you, and you can download the software. Once the .deb file is downloaded, install it using dpkg -i and follow the configuration instructions.
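For reference, the install boils down to something like this (the .deb filename here is a placeholder; use whichever version you downloaded):

dpkg -i Nessus-X.Y.Z-debian6_amd64.deb    # hypothetical filename
/etc/init.d/nessusd start                 # start the Nessus service

Then browse to https://localhost:8834 to finish setup and enter the activation code.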

This is where the modern versions of the software and Kali start to diverge from the book. The mingw-w64 compiler is already included in Kali and should have been updated in the previous step. Download Hyperion 1.2 from the following link: http://nullsecurity.net/tools/binary.html. Unzip it and compile it with the following command:

i686-w64-mingw32-c++ Hyperion-1.2/Src/Crypter/*.cpp -o hyperion.exe
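Hyperion builds as a Windows executable, so on Kali it runs under Wine. Usage looks roughly like this (the file names are placeholders):

wine hyperion.exe payload.exe payload_encrypted.exe    # produce an encrypted copy of a PE binary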

Veil-Evasion setup is by the book and simple, but it will take quite a bit of time. Once that is complete, make the Ettercap config changes detailed in the book. Then it is time to move on to the Android SDK. First, some changes are required for the SDK to run the phone emulators properly. Run the following command to install the libraries the SDK requires:

sudo apt-get install lib32z1 lib32ncurses5 lib32stdc++6

Then two environment variables must be set. The first tells the SDK to use Kali’s libraries, installed in the previous step. The second tells the SDK where its root directory is. Add the following two lines to /etc/environment:

ANDROID_EMULATOR_USE_SYSTEM_LIBS=1
ANDROID_SDK_ROOT=/root/Android/Sdk

Once those have been added, add a script to the /etc/profile.d directory that exports the two environment variables:

export ANDROID_EMULATOR_USE_SYSTEM_LIBS=1
export ANDROID_SDK_ROOT=/root/Android/Sdk
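One way to do that, assuming a script name of android.sh (my own choice; any name in /etc/profile.d should work):

cat << 'EOF' > /etc/profile.d/android.sh
export ANDROID_EMULATOR_USE_SYSTEM_LIBS=1
export ANDROID_SDK_ROOT=/root/Android/Sdk
EOF
chmod +x /etc/profile.d/android.sh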

Download the Android SDK for Linux at https://developer.android.com/studio/index.html. Unzip it, navigate to the bin directory within the unzipped files, and run the studio.sh script. That should start Android Studio. Prior to creating the emulated smartphones, download the packages associated with each smartphone image. Find those by opening the SDK Manager within Android Studio and selecting the “Show All Packages” button. Once selected, you can view the supporting packages for the images. Select for download the packages that support the Android versions mentioned in the book.

[Image: SDK Manager with “Show All Packages” selected, showing supporting packages for the images]
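For reference, the unpack-and-launch steps above boil down to something like this (the archive name and /opt location are assumptions):

unzip android-studio-ide-*-linux.zip -d /opt    # hypothetical archive name
cd /opt/android-studio/bin
./studio.sh                                     # launch Android Studio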

Once these downloads are complete, navigate to the AVD Manager utility within Android Studio and create a new smartphone image for each image listed in the book, being sure to select the correct version of Android.

[Image: AVD Manager showing the emulated smartphones]

There is an issue in my version of Android Studio in which ARM emulated smartphones must have their config files manually pointed to the correct image. In a default installation the config files are located at /root/.android/avd, and there should be a separate directory for each smartphone created in the AVD Manager. Within each directory, open the config.ini file and note the image.sysdir.1 path. The smartphones will be listed by API version; below is the config.ini entry for API 8:

[Image: default config.ini image.sysdir.1 entry for the API 8 emulator]

This points to a directory that does not exist in the default installation. To correct it, change the image.sysdir.1 path to point to the relative path of the installed image for that smartphone. For the API 7 and 8 versions, this is located in the platforms directory, at $INSTALL_DIR/platforms/android-X/images, as seen below:

[Image: corrected config.ini entry for the API 8 emulator]
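Reconstructing from the path above, the corrected entry for API 8 should look something like this (the exact path depends on your install):

image.sysdir.1=platforms/android-8/images/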

The image for the API 18 emulator is located in $INSTALL_DIR/system-images/android-18/default/armeabi-v7a after installation, as seen below:

[Image: config.ini entry for the API 18 emulator image]
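So the corrected config.ini entry for the API 18 emulator should be along these lines:

image.sysdir.1=system-images/android-18/default/armeabi-v7a/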

You should now be able to run the emulators from the AVD Manager window. When running ARM emulators on an x86 host, expect to receive the following warning:

[Image: warning shown when running an ARM emulator on an x86 host]
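The emulators can also be started from a terminal, which helps with troubleshooting. Something like the following should work, though the emulator binary’s location varies between SDK versions:

$ANDROID_SDK_ROOT/tools/emulator -list-avds         # list the AVDs that have been created
$ANDROID_SDK_ROOT/tools/emulator -avd <avd_name>    # launch one by name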

Building a Windows XP machine can be tricky. I tried to build one from disc but had issues with the VMware SCSI driver. The driver is available here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005208. I tried pre-loading the driver but was unable to get it to work, and in the interest of saving time I went another route. Test VMs for Windows XP are still available from Microsoft, although they do not publicize the link; it can be found here: http://www.askvg.com/download-free-windows-xp-vista-and-windows-7-vhd-image-files-for-microsoft-virtual-pc/. After extracting the .vhd file, follow the steps at http://alstechtips.blogspot.com/2013/11/how-to-migrate-vhd-to-vmware-workstation.html to import the .vhd file for use. After a successful import, log into the VM and install the network drivers located at https://downloadcenter.intel.com/download/18717.

For the software associated with the book, download Firefox first; IE 8 is not supported by the websites hosting the software detailed in the book. The software associated with Windows XP installs according to the book’s description, with the exception of mona, which is now located at https://github.com/corelan/mona/ instead of the link given in the book.

The Ubuntu VM can be downloaded via the torrent link given in the book. The book provides the password for unpacking the files, and importing the VM did not have any issues.

Building a Windows 7 VM is significantly easier than the Windows XP VM. There are no driver issues with the stock Windows 7 SP1 x86 build, so you can install from disc, or you can find a Windows 7 test VM at the test-VM link above and follow the directions to import it into VMware. Once it is installed, again download Firefox to access the software needed to follow along with the book, since IE 8 will not be able to. Note that if you try to use a Windows 7 SP1 x64 build, the version of SQL Express in the torrent package will not install correctly. There is an x64 version available from Microsoft, but I did not have much luck getting SP3 to install correctly even with the x64 package. Rather than spend more time trying to make this work on an x64 platform, I moved forward with x86 and it worked without a hitch.


Moving Forward – Setting up a Pentest Lab

So, having solved all other problems, I want to learn more about the offensive side of security. The best way to do that, as far as I can see, is to get a good lab going and work through some material. So here’s my new goal: a year from now I want to take the OSCP. I’m giving myself a year because there’s no ticking clock, and I want to be thorough and actually learn the material. It also gives me time to learn on my own and to get involved in at least two, possibly three CTFs between now and then, with BSides DC, Baltimore, and Shmoocon all coming up.

Step 1: identify material. There are official OSCP materials available at the usual places, but that’s no good: you want to pay for that, and besides, you want to be able to interact with the instructor and other students. And yet now is not a good time to take the official course, with a new school semester starting soon (incident handling and “big data” classes, should be fun). Georgia Weidman’s book on pentesting, cunningly titled Penetration Testing, gets great reviews from people in the industry, and after going through the first couple of chapters it seems on point. So I’m going with this to start. I am also finally going to work my way through Black Hat Python by Justin Seitz, to improve and focus my coding skills. I’ll use this blog to track progress through this material and figure out where to go next.

Step 2: make a lab. I am cheap, and I am determined to make a lab as cheaply as possible while still having as much potential as I need. I made it through nearly two years of college in an IT program using only an Acer C-720 Chromebook that I picked up for $150 back in the day, so I am confident I can make this work. I am taking two approaches. First is the laptop I replaced that Chromebook with, a Toshiba Satellite C-55 that I picked up last fall for about $400. That laptop plus a quick memory upgrade to 16 GB has been pretty formidable: more than enough to run a few low-budget VMs, and probably enough to run through some basic offensive lessons.

But, of course, I want more. So a year or so ago I picked up a 1U Dell PowerEdge 1900 server from eBay. It’s an older server, definitely not up to modern standards, but it also cost $90. It came with 16 GB RAM, which I was able to bump to 32 GB for a total cost of about $24. The goal is to wait until GMU activates my Dreamspark account again this fall, download the free ESXi software available there, and configure and run multiple VMs on it, running through scenarios remotely when possible.

So that leaves me with the following:

  • 2.2 GHz Intel Core i5 laptop with 16 GB RAM
  • Dual 2 GHz Intel Xeon server with 32 GB RAM
  • still rockin’ the Chromebook

Total cost of all of this comes out to about $650, but considering the only thing I actually purchased for this initiative was the memory for the server I had sitting in a closet, I think so far so good.

I’ve set up the initial Kali VM from the Weidman pentesting book on the laptop, but since the book is a bit older, some things don’t quite fit with the new version of Kali, probably just from the passage of time. I’ll work through problems as I come to them.

So far that’s the only thing I have had time to do, though, because in the past month life has interfered. I gave my first talk at NovaHackers; it wasn’t great because I was nervous and stepped on what I had planned to say, but whatever. It was nice to meet people and see the great talks, and tomorrow is another meeting. I’ve learned Python, using Python Crash Course by Eric Matthes, which is a good teaching tool. I’m transitioning to a new job over the next few weeks. I’m even thinking up new blog ideas and possibly talks; I want to do one on the Nessus API, which could be useful. We will see. I also passed the CEH. I hate to say things are easy, but really, it is; how they charge that much money for it I have no idea. Still hoping to do this blog every week, even if I did fall behind for a month.

Vulnerability Management Maturity Models: Analyzed

I looked around for the model that was originally shown on Security Weekly, but I was never able to find it. That’s unfortunate, because it looked useful. They never specifically mention it, but I think their model was based on the Gartner maturity model for endpoint security, which looks like this:

[Image: Gartner endpoint security maturity model curve]

That was blatantly stolen from Tripwire, who lifted it from a SANS presentation, and it pretty clearly shows that it isn’t all improvements. Things level off, or can even begin to degrade, as you gain more information about your environment. More information means you have to begin to really manage the information rather than just react to scan data, or whatever the data may be. It’s a key point that these models address in different ways. And there is no “best” model. Each organization must choose a model suited to them, and then change that model up, customize it. These are just a platform to start from in building a way ahead for your own organization.

My favorite model that I reviewed was the Core Security model. The graphic alone is a winner: I can give it to my boss and he will immediately understand what it means. I can tell him where we are on the curve, and it will make sense. More importantly, our challenges will make sense, and there is a clear way ahead. It addresses the problem of information overload directly and gives clear indications of how to move past it to effectively manage your environment. When I first read about the idea of a model to gauge your vulnerability management program, this is pretty much what I was hoping to find.

That doesn’t mean it is a complete solution, but as I said, nothing is. The key is adapting this model to your organization in a way that keeps the key features without losing what makes it effective. That is a project I’m working on now. I won’t (can’t; they would never let me) share the results, but I don’t think they would matter to anyone else. My results will be specific to my own experience and to making the gains I need to make within the constraints I know I have. Each organization can, and should, look at themselves through the lens of one of these models and see what they can change.

I’m giving a talk about this next week at NovaHackers. It will be my first ever talk there (or meeting attended), and I will try not to bore everyone to death with this, but I actually enjoy writing and thinking about it. It’s easy to nerd out over charts and graphs, so I will keep it simple. I’m taking the CEH on Wednesday; I’m not worried about passing it, only that if I fail I’ll be a laughingstock and it’s $700 down the drain. No pressure. Next week… I am not sure what I’m going to write about.

Digital Defense Incorporated’s Vulnerability Management Maturity Model

Digital Defense Incorporated released a white paper on their vulnerability management maturity model, which they dub VM3, back in 2015. The white paper is pretty robust, detailing the steps of vulnerability management as a discipline and then diving into the maturity levels and what they mean. However, I’m focusing on the maturity levels rather than their whole breakdown, which is a good write-up on its own.

[Image: the six levels of Digital Defense’s VM3 maturity model]

There are six levels in the model, starting with Level 0. They also define a level not on the scale entitled “Vulnerable To A Breach”, which I find to be more than a little incorrect, though I do get what they were trying to convey. Level 0 is entitled “Importance Acknowledged”, and it entails exactly what you would think from the name. The graphic is mostly trying to show that prior to acknowledging the importance of vulnerability management as a whole, the risk involved is considerable and unknown. By taking that first step, an organization can begin the process of vulnerability and risk management, but they are certainly still vulnerable to a breach. A poor choice of wording, I think, especially considering that one of the purposes of these models is for security professionals to use them to explain a way ahead to their leadership, and it gives the wrong impression.

Level 1 is entitled “Primitive Operations”, and it is at this stage that the organization adds in scanning on an ad-hoc basis. The key detail from this level is that the organization is unable to meet compliance objectives. Without the processes in place to prioritize findings and integrate remediation or mitigation, the program is weak.

When an organization is in Level 2, “Purpose Driven Compliance”, we begin to see the automation of tasks like scanning and even scheduled assessments. Other aspects like trending and the beginnings of remediation prioritization are also found here, but that prioritization is immature and shows how at this point the security team is not integrated with other groups within their organization. An important point is that at this level the organization is able to use these fledgling processes to actually achieve compliance.

Priority begins to play a larger role in Level 3, “Proactive Execution.” This level sees scanning become more advanced and more frequent, and remediation become more integrated with the organization’s business practices. The white paper makes the point that at this level we are still not talking about executive-level buy-in, which is extremely important for the future growth of the program.

At Level 4, “Committed Lifecycle Management”, we see it come together. The executives are on board, remediation efforts are fully integrated and operate on a timeline, and prioritization is a fundamental part of the program. This is also where we begin to see automation playing a role, with automated scans and automated patching.

The last level, Level 5, is entitled “Automated Security Ecosystem”. The idea of this level is to take Level 4, which is a fairly complete system, and add further automation to make it as seamless as possible. Multiple scans are done from variable vantage points in order to leverage the maximum possible system information, and that data is incorporated into the system for analysis.

So there are a few problems with this one. I already spoke about the pre-Level 0 issue, which is just a disagreement over wording. Another problem I have is that credentialed scanning doesn’t show up until Level 3, yet in Level 2 the assumption is that compliance has been achieved. I don’t see how any organization can realistically be compliant with whatever compliance framework they answer to without implementing at least some of the technical measures discussed in Level 3. This lack of technical maturity at the stage when you should be achieving compliance really hurts the model, in my opinion. Aside from those minor issues, the model really seems to capture the evolution of a program.

Next week, I’ll compare these three models against each other. I’ve signed up to give a talk on this at NovaHackers next month, so there it is: my first talk with the group. I’ve been on the mailing list for a while, but they have meetings on Mondays and I’ve had classes on Mondays since forever. I decided to look at only three models, because beyond that they start to get very obscure, and there is so much repetition that I don’t think I need to go over any more.

Core Security’s Vulnerability Management Maturity Model

Core Security’s model is more robust and detailed than the previous one. They drafted this model based on client issues they have dealt with in the past and modeled it on the evolution of a security program. They’ve built in milestones and more steps to help organizations define where they are on the roadmap, although they are quick to say that this does not imply all organizations should ideally be at level five.

The model is broken into six total levels, starting from level zero. The levels are further grouped into three pairings that describe the overall status of these levels, and significant indicators are given at the boundary between these groups as to what crossing into the next group means. Let’s deal with the three groups first. They are:

  • Blissful Ignorance
  • Awareness and Early Maturity
  • Business Risk and Context

Blissful Ignorance covers the first two levels, where an organization does not grasp the full scope of threats to the enterprise, or possibly even the scope of the enterprise itself. The boundary that crosses from this group to the next is titled Peak Data Overload, and is meant to describe the problem organizations have when they implement tools that provide information without having the tools to put that data in context and glean insight from it. Once into the Awareness and Early Maturity group, that context, gained through new tools and processes, builds and allows the organization to be more and more effective with their data. The boundary that crosses into the next group is named Effective Prioritization, which describes the implementation of risk management within the enterprise using the given data. The last group is Business Risk and Context, and it is at this point that we’re truly talking about a mature program, one that is not just addressing risks but incorporating the true business impact of those risks.

This layout makes it easy to see, in quite broad strokes, what the true meaning of the specific levels is going to be without delving deeply into any of them. You know immediately, based on the group, where an organization is with data analysis, and can probably make some educated guesses about the status of their tool implementations. But the groups are further divided into levels, which are:

  • Level 0: Non-Existent
  • Level 1: Scanning
  • Level 2: Assessment and Compliance
  • Level 3: Analysis and Prioritization
  • Level 4: Attack Management
  • Level 5: Business-Risk Management

Level 0 is exactly what it sounds like: no program, minimal controls, no mitigation strategies. Level 1 introduces scanning on some level and some amount of mitigation based on those scans, but no consistent plan for either. Level 2 is where the program actually starts to coalesce, with scheduled scanning driven by some sort of compliance framework and a plan for mitigation. Level 3 begins to get into real risk management, beginning the process of prioritization and trending. Level 4 shifts the focus, assuming the processes for scanning and patching have become mature enough to handle that switch, and starts looking at actual threats and attackers. The last step, Level 5, integrates with business processes and looks very much like the continuous monitoring cycle that is so often talked about.

This model has a lot of great information in it, and overall I like how it is organized. It is easy to read and most people can look at this and without much analysis guess where they are going to fall in the model, and probably be correct. I like that Core Security is very blunt on the state of most programs, stating that they will mostly fall somewhere between levels 1 and 2, and that they offer several specific measures to help organizations grow within the context of the model. Core Security focuses on operational context in their proposed solutions, extending vulnerability management into other tools and security realms so that it is truly integrated into the business.

On the negative side, the model is not a fit for every organization. It is more specific, which is good for implementation, but that specificity can limit an organization’s ability to grow within the model. Smaller organizations especially will have trouble adopting some of the prescribed measures, which can be quite costly or resource intensive. Core Security does point out that the objective for every organization will not necessarily be to progress continuously to the end as a “goal”, but that undercuts some of the usefulness of the model as well.

Overall, Core Security’s model is a very strong and very detailed one. The fact that I can immediately look at it and tell where my organization is, and where our higher headquarters is, is pretty amazing for any model. Core Security focuses on the problem most people have: too much data and not knowing what to do with it. Next week, assuming I’m not late again, I will be looking at the VM3 model from Digital Defense Incorporated.

Related links:

The Threat and Vulnerability Management Maturity Model

https://www.coresecurity.com/system/files/attachments/vulnerability-management-maturity-model-white-paper_0.pdf

Verizon DBIR 2016

I’m a few days late in posting this; I like to get it done on Mondays. Maybe I’ll move that to Fridays from here on out, but I still intend to post weekly. It has been helpful so far: I feel like I can talk confidently about the things I have posted on.

So Verizon released their Data Breach Investigations Report for 2016, to much controversy. Of concern to me, of course, is the vulnerability section, which has been the source of the most controversy. It was a product of Kenna Security, spearheaded by Michael Roytman, a well-known data scientist. So this thing has credentials, and it is pitched specifically as being actionable for vulnerability management personnel.

Really this whole thing frames another semi-popular topic from the week, which was impostor syndrome. Ben Hughes wrote a blog post about it, and it is something I have struggled with for a while. In a field with so many talented people, it is easy to forget that the most visible people are also the most extremely talented in the pool, and that even they have weaknesses as well as strengths. It is important to look at yourself with perspective, but also to look at others with perspective. If you see an amazing presentation at a conference, you’re seeing a point on a timeline that started years and years ago. It should be inspiring rather than making you feel like you’re not a part of the group. Which, you know, is easier said than done.

But the DBIR situation really demonstrates to me how important this is to overcome. Verizon is Verizon, Roytman is very intelligent, and Kenna has done some great work. In the face of this, it is tempting to look at their data points on vulnerabilities and assume that any issues you have with them are due to your own lack of experience or lack of data. It is important to recognize the flaws when you see them. The sample is limited, even if it is a large sample. The results were not pruned, leaving many DoS results in the top vulnerabilities that just do not lend value. The methodology was disclosed, yet there are vulnerabilities on the list that could not possibly meet the stated requirements. The results themselves didn’t make sense, with specific vulnerabilities being called out that have probably never been exploited at all, let alone enough to demand a spot on a list such as this.

The value in a report like this is in its applicability, and the problem with this one, at least the vulnerability section, is that it has no application. I cannot in good conscience take these results and advise my engineers to prioritize remediation of the FREAK SSL vulnerabilities over newly released Microsoft patches, as Roytman suggests. Big-picture stuff is great, but at the end of the day I have a network to help protect, and following this advice would undermine those efforts. It is important to be critical: not disrespectful of the individual or the work, but still skeptical. And this is a point where you have to overcome the impostor syndrome paradigm and understand that you don’t have to be a renaissance hacker to realize that these results are sorely lacking in operational perspective. I can’t apply them, and I can’t really glean much from them aside from making some assumptions about the data set they came from. This is an important lesson to keep in mind: no matter the source, be skeptical.

Rob Graham posted a synopsis just yesterday of how they arrived at these results. He has an IDS background and breaks it down pretty simply. But the bottom line is that, as I said above and he said in his own post, the data is not actionable. This is the most research I’ve ever done into an industry report, and to see it so full of holes is distressing.

It’s the last week of school, and my last final is today (*it was Monday; I am late), then I can refocus on more pertinent stuff. Next week I will write about Core Security’s vulnerability management maturity model. I planned to do that this week, but this whole DBIR thing butted right up against the other discussion about impostor syndrome, and that is an important thing to think about. I hope to keep this blog up and look back years from now at how much I have changed in this respect.

Vulnerability Management Maturity Models – Tripwire

Vulnerability management maturity models are something I have been interested in since hearing an episode of Security Weekly a couple of months ago in which William Olsen from Tenable discussed the concept. I like models, not just because I’m lazy, but because I like standards. A model gives you a point of reference against which to gauge your own processes, something you can’t really get independently. The purpose of a model should be to provide that reference and to give guidance on how to improve your own processes.

So for my first weekly exercise I’m going to kick off something I have been meaning to do since I first heard that podcast: an analysis and comparison of different publicly available vulnerability management maturity models. First up is Tripwire’s, with an overview given by Irfahn Khimji in a white paper and a subsequent article, both linked at the end.

Rather than draft something out of whole cloth, Tripwire based this model on the Capability Maturity Model (CMM) developed for the DoD. The CMM is most often linked to software development programs, having been developed at a time when software development was becoming a larger part of systems engineering within the DoD, but it describes general processes that can be applied to programs outside of software development. The choice to adopt an existing, proven model makes sense. As in the CMM, Tripwire’s model runs from Level 1 to Level 5. A general breakdown of the steps:

  1. Initial – A chaotic process, with scanning managed by outside entities and the bare minimum being done to meet compliance standards
  2. Managed – Scanning is done in house and processes are beginning to be defined for identification and resolution of vulnerabilities
  3. Defined – Processes are well-defined and management supports those processes
  4. Quantitatively Managed – Processes are enhanced by specific goals and requirements
  5. Optimizing – Continuous process improvement

[Image: the five CMMI maturity levels, staged representation]

One positive aspect of this model jumps out immediately: it is independent of any technology or vendor, which is not a given in models like these. The model focuses on process improvement rather than on implementation of technologies or controls, with the definition of improvement left largely to the organization. An organization is made to analyze its own processes rather than rely on a checklist, but the model also does not offer specific suggestions for organizations that are lost on how to proceed. That direction should come from the organization’s own analysis, but it could be incorporated into a model in a way that does not dictate a static path.

Further analysis of the steps:

Initial – The key characteristic of organizations in this step is a lack of control over the process. Because organizations do not have defined processes, they are outsourcing capabilities to a provider who does scans for them. There are no goals in this stage other than meeting minimum requirements applied to the organization by whatever industry or governmental standard they adhere to.

Managed – In order to move out of the disorganized Initial stage, processes must be defined. An organization cannot establish effective processes that move the program forward without taking control of scanning and incorporating it into internal processes. This stage also focuses on uncredentialed scanning to give a view from the outside. This is a step up from the Initial stage only in that the organization is taking control of the processes; aside from that, it is still pretty dismal. No goals or requirements are established beyond meeting minimum standards, the process is receiving minimal or no support from management, and there is limited follow-through on data generated by the scans.

Defined – At this stage the program actually begins to coalesce into a useful tool. Processes become well-defined, meaning there are requirements, and scanning and remediation are becoming integrated into maintenance routines. Most importantly, management buy-in has already occurred at this stage. This means management sees the value in the program, which only happens when the vulnerability management team is working well with system administrators to accomplish the required remediations with minimal impact to operations. Tripwire states that most organizations fall somewhere between the Managed and Defined stages, which maps closely with what other organizations say about their models.

Quantitatively Managed – This is where we set goals for the program. Scoring is developed and implemented, with thresholds set in a way that makes sense and properly takes risk into account. This requires a full understanding of your environment and the capabilities you have available. This is also the stage where the organization is really equipped to make risk management decisions: the full scope of the available controls is considered when making scoring decisions to determine the true risk of a vulnerability.

Optimizing – This is the last step, but it’s not accurate to think of it as an end state. This phase is really just continuing maintenance of the program, and continuous improvement is the key. The program must be evaluated against the metrics developed earlier in the process, with the idea being to use those metrics to set realistic goals and timeframes for achieving them. Once a goal is achieved, reevaluate and then assess based on the revised metrics.

Conclusion

This model outlines in broad strokes how to take a vulnerability management program from a barely functioning afterthought to a well-developed program adding value to the organization. It’s vague at points, but that is not necessarily a negative: it provides a framework that any organization can apply to themselves with some work and analysis. Depending on the organization, though, that lack of specificity could be a drawback. In cases where an organization needs more specific guidance, this model would be suitable as a partial solution, used either on top of another model or with vendor or industry guidance. The model is standardized to make its reach as wide as possible and follows a model known to work in software development.

I plan to go over some different models and then compare them; at that point this analysis will become more useful.

References:

http://www.infosecurityeurope.com/__novadocuments/153767?v=635875243235970000
http://www.tripwire.com/state-of-security/vulnerability-management/the-five-stages-of-vulnerability-management/

Alright then

Well, I guess I will be doing this then. Micah Hoffman gave a talk at BSides Charm about how to be more involved (https://twitter.com/WebBreacher/status/724340389323395072). It was like he was talking about me. I go to the conferences, I watch the talks, I talk to a few people but mostly keep to myself, and that’s not helping me any.

I have thought about doing a blog for a while, and it always felt pretentious, but the way Micah described it made it seem like exactly what I need: a way to keep publicly available notes for myself and track my own progress. So this is it. I’ll commit to doing this once per week, hopefully mostly focused on infosec and what I’m doing, but maybe different stuff as well. It may not always be an epic post, but every week there will be at least one thing.

This week I am setting up the blog itself. Anyone who has ever set up WordPress knows that is not really any work at all. This is a difficult time with school and family. School is winding down, which is of course when activity ramps up. I’m working on my MS at GMU, and in the next couple of weeks I have three finals, a presentation to give on SCADA security (or the lack thereof), and an enormous paper to deliver on PKI, specifically on smartcards, which is kind of interesting; at least I understand them much better than I did before, which was not at all.

School is necessary and knowledge is always great, but I can’t wait to be done. I would much rather spend this time on other things. I keep an internal list of what I need to do this year; might as well make it public here:

  1. Pass the C|EH exam, not hard but expensive so I need to study up (May)
  2. Actually participate in NOVAHackers rather than just read the list (June)
  3. Volunteer for BSides DC (October)
  4. Evolve our vulnerability management program at work (ongoing)
  5. Become stronger on security tools I work with (ongoing)
  6. Develop more specific goals (ongoing)
  7. Complete MS coursework (May 2017)

Items 4 and 5 are not very specific, thus item 6. As for item 7, there’s nothing I can do to speed that up, so I may as well use the time I have well. In the immediate future I am focused on what I need to do to finish this semester of school. Once that is complete, the plan is to knock out the certification (which I already paid for) and to refocus on the vulnerability management program implementation. It’s exciting to have so much ahead!