The National Vulnerability Database (NVD) is widely treated as the single source of truth for publicly known software and hardware vulnerabilities. However, a few often-overlooked issues make it far less reliable as the sole foundation of a vulnerability management program.
Let's look at them.
- It is up to the discretion of the vendor, or of whoever discovered the vulnerability, to report it and reserve a CVE identifier that can then be tracked and updated.
- In an overwhelming number of cases, vulnerability information is public before any data about it appears in NVD. As an example, look at last week's vulnerability reports across different vendors: close to 50 new vulnerabilities were rated medium severity or above by public sources, including vendor bulletins, RSS feeds, and blogs, yet NVD had no data available for them (they were marked as reserved). ThreatWatch was able to infer a rating for each one using an ML model trained specifically for this objective. If you look up any of these CVE identifiers today, the NVD website returns an error message indicating the record is still reserved.
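The gap described above is easy to detect programmatically. A minimal sketch, assuming a record in the shape of the NVD API 2.0 JSON (the `vulnStatus` values below mirror NVD's published statuses, but the sample record itself is illustrative, not real NVD data):

```python
# Sketch: decide whether an NVD record (API 2.0 JSON shape) carries usable
# analysis data, or is still a reserved/awaiting-analysis placeholder.

def has_usable_analysis(cve_record: dict) -> bool:
    """Return True if the record carries severity metrics we can act on."""
    status = cve_record.get("vulnStatus", "")
    if status in ("Received", "Awaiting Analysis", "Undergoing Analysis", "Rejected"):
        return False
    # Even an "Analyzed" record is only useful if metrics are present.
    return bool(cve_record.get("metrics"))

# Illustrative placeholder record, not a real CVE.
sample = {
    "id": "CVE-0000-00000",
    "vulnStatus": "Awaiting Analysis",
    "metrics": {},
}

print(has_usable_analysis(sample))  # prints False
```

A pipeline that checks this flag per identifier can route "no data yet" CVEs to other sources instead of waiting on NVD.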
- Affected products, another crucial piece of data that determines whether a vulnerability is relevant to an organization, are updated over a period of time: sometimes days to weeks, in other cases months to years. For CVE-2018-0592, for example, it took a couple of months before the affected products were updated in NVD.
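Affected-product relevance comes down to matching a CVE's CPE list against your inventory. Real CPE matching must handle version ranges and wildcards; the sketch below deliberately compares only the vendor and product fields, and all CPE strings in it are illustrative:

```python
# Simplified sketch of CPE-based relevance: does a CVE's affected-product
# list overlap with what we run? (Version ranges and wildcards ignored.)

def cpe_vendor_product(cpe: str) -> tuple:
    # CPE 2.3 format: cpe:2.3:a:vendor:product:version:...
    parts = cpe.split(":")
    return (parts[3], parts[4])

def affects_inventory(cve_cpes, inventory_cpes) -> bool:
    owned = {cpe_vendor_product(c) for c in inventory_cpes}
    return any(cpe_vendor_product(c) in owned for c in cve_cpes)

cve_cpes = ["cpe:2.3:a:examplevendor:exampleapp:1.2:*:*:*:*:*:*:*"]
inventory = ["cpe:2.3:a:examplevendor:exampleapp:1.2:*:*:*:*:*:*:*"]
print(affects_inventory(cve_cpes, inventory))  # prints True
```

The key point is that this check is only as good as the CPE data NVD has published; while the product list sits empty, the match silently returns nothing.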
- Traditional scanning solutions rely heavily on NVD for the test cases they add to their tools. The factors above, combined with the prioritization that scanning vendors must apply when adding new test cases, drastically increase detection time and leave coverage incomplete.
- Lastly, there are hundreds of structured and unstructured sources where vulnerabilities are disclosed, discussed, and reported. It is impossible to track each of them in-house, or via a vendor that curates this information manually, in near real time. What is needed is a machine-driven, zero-touch approach that not only discovers vulnerabilities but also normalizes them and carries out impact assessment in real or near-real time.
So the question to ask is: what should a vulnerability management program focus on?
- It is important to look at NVD (CVE identifiers), but NVD will never be cutting edge in terms of timeliness. What is needed is the ability to look at a vulnerability's relationships to vendor advisories, bulletins, blogs, tweets, and, in many cases, related vulnerabilities. These relationships provide the most *relevant* and *current* information for taking a risk-based approach to planning a mitigation.
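One concrete place those relationships already surface is the `references` array of a CVE record, where each entry can be tagged as a vendor advisory, exploit write-up, and so on. A sketch that groups them by tag, assuming the NVD API 2.0 record shape (the sample tags and URLs are illustrative):

```python
# Sketch: group a CVE record's references by tag so advisories, blogs and
# related write-ups can be surfaced alongside the bare NVD entry.
from collections import defaultdict

def references_by_tag(cve_record: dict) -> dict:
    grouped = defaultdict(list)
    for ref in cve_record.get("references", []):
        for tag in ref.get("tags", ["untagged"]):
            grouped[tag].append(ref["url"])
    return dict(grouped)

sample = {"references": [
    {"url": "https://example.com/advisory", "tags": ["Vendor Advisory"]},
    {"url": "https://example.com/blog-post"},  # no tags on this one
]}
print(sorted(references_by_tag(sample)))  # prints ['Vendor Advisory', 'untagged']
```

In practice these grouped links are the starting point for pulling in the advisory and blog content that is fresher than the NVD entry itself.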
- Mitigations (i.e., patches and workarounds) are published by vendors and researchers, so your vulnerability intelligence should always flow from this multitude of sources back to NVD, not the other way around.
- Host and application scanning has outlived its usefulness. What is needed is continuous monitoring of vulnerabilities across structured and unstructured sources, tracked against your inventory. Automation in these two areas will serve as the bedrock for managing your organizational exposure.
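Tracking a vulnerability stream against inventory reduces to a filter: keep only records touching products you actually run, at or above a severity floor. A minimal sketch; the record fields and product names are assumptions:

```python
# Sketch of "continuous monitoring tracked against your inventory": filter
# a stream of normalized vulnerability records down to the ones that matter.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def relevant(records, inventory_products, floor="medium"):
    floor_rank = SEVERITY_RANK[floor]
    return [r for r in records
            if r["product"] in inventory_products
            and SEVERITY_RANK.get(r["severity"], 0) >= floor_rank]

stream = [
    {"product": "exampleapp", "severity": "high"},
    {"product": "exampleapp", "severity": "low"},       # below the floor
    {"product": "otherapp",  "severity": "critical"},   # not in inventory
]
print(len(relevant(stream, {"exampleapp"})))  # prints 1
```

Run continuously over normalized feeds, this is what replaces periodic scan windows with near-real-time awareness.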
- Lastly, de-centralizing vulnerability intelligence down to the level of actual impacts on services and assets across different verticals is the only way to build a sustainable and scalable vulnerability management program. Organizations need to account for the diversity of their software and hardware assets and of the stakeholders who manage them, whether in corporate or production environments, on-prem or cloud-hosted.
Reach out to ThreatWatch and we can provide the building blocks necessary to shape your own vulnerability management program.