Everyone in the technology industry who processes credit card data is familiar with PCI DSS and the associated process of getting attested on a quarterly basis by a QSA (Qualified Security Assessor). In addition to the quarterly attestation, organizations also have to go through both internal and external audits on an annual basis to maintain PCI compliance.
PCI DSS requires organizations to maintain basic security hygiene, which means having the necessary secure development lifecycle processes, training and awareness programs, and so on. One of the requirements is also to ensure your PCI environment is secured against known vulnerabilities. This is an ongoing effort that requires organizations to remain vigilant and knowledgeable about vulnerabilities, and to run a comprehensive vulnerability management program.
PCI DSS requirements 6.1, 6.2 and 6.3 are relevant in that regard.
Current Approach and Challenges
Most organizations rely on some kind of vulnerability scanning solution for vulnerability detection (in some cases more than a single tool, to get coverage across layers from compute to network to storage). These scanners initiate scheduled scans on a periodic basis to highlight vulnerabilities that need to be addressed. PCI DSS requires all vulnerabilities with a CVSS version 2.0 score of 4.0 or above to be addressed for PCI compliance.
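To make the triage step concrete, here is a minimal sketch of the filtering most scanners apply against that threshold. The findings list, hosts and scores below are purely illustrative sample data, not real scan output:

```python
# Minimal sketch: triaging scan findings against the PCI DSS
# failing threshold (CVSS v2.0 score of 4.0 or higher).
# The findings below are hypothetical sample data.

findings = [
    {"cve": "CVE-2018-3646", "cvss_v2": 5.6, "host": "pci-web-01"},
    {"cve": "CVE-2017-0005", "cvss_v2": 6.9, "host": "pci-db-02"},
    {"cve": "CVE-2019-0001", "cvss_v2": 2.1, "host": "pci-web-01"},
]

PCI_FAIL_THRESHOLD = 4.0  # findings at or above this score fail PCI

def pci_failing(findings):
    """Return the findings that must be remediated for PCI compliance."""
    return [f for f in findings if f["cvss_v2"] >= PCI_FAIL_THRESHOLD]

for f in pci_failing(findings):
    print(f"{f['host']}: {f['cve']} (CVSS v2 {f['cvss_v2']}) must be addressed")
```

The filter itself is trivial; the hard part, as the next section argues, is whether the scan data feeding it is fresh enough to be trusted.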
This continuous scanning approach has outlived its shelf life, and organizations increasingly see it as an exercise done purely for compliance, with little to do with actual risk. Even viewed purely from a compliance angle, it leaves a lot to be desired. Here's why:
- NVD (National Vulnerability Database) isn't the greatest source for the latest vulnerability information. You can read more here about why.
- To detect vulnerability impact, scanning solutions rely on adding test cases for individual vulnerabilities (CVE identifiers). Prioritization backlogs and heavy reliance on NVD for the necessary details mean a significant lag in detecting late-breaking vulnerabilities.
- The pace at which vulnerabilities get added, the time it takes to add new test cases and, ultimately, the cadence of the scan schedule create a situation where at no point in time can a single scan be trusted to paint the latest picture from a compliance standpoint (see the sketch after this list).
- Very often vulnerabilities have relevance across vendor solutions, and it is beyond the scope of a single host or network scanner to detect this across the different layers of the stack that touch PCI data. Take, for example, CVE-2018-3646 (the Intel L1 Terminal Fault vulnerability, as we all know it), which prompted advisories and patches from all major OS vendors, networking device manufacturers, virtualization software vendors and cloud providers. Even a slight change of configuration can unmask vulnerabilities that have not surfaced in the past, as newer systems and components come under PCI scope.
- Another aspect, especially for larger environments, is that a host/network scan takes a very long time to complete and provides diminishing value as environments are scaled up for performance or redundancy and are largely homogeneous in their makeup.
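The combined lag is easy to quantify. A minimal sketch follows; all dates are hypothetical, and in practice they would come from your intelligence feed, your scanner vendor's test-case (plugin) feed, and your scan schedule:

```python
# Minimal sketch: the "blind spot" between a CVE's public disclosure
# and the first scheduled scan that can actually detect it.

from datetime import date, timedelta

cve_published    = date(2018, 8, 14)   # vulnerability disclosed publicly
plugin_available = date(2018, 8, 24)   # scanner vendor ships a test case
scan_interval    = timedelta(days=30)  # cadence of the scheduled scan
last_scan        = date(2018, 8, 10)   # most recent completed scan

# Walk the schedule forward to the first scan that has the test case.
next_scan = last_scan
while next_scan < plugin_available:
    next_scan += scan_interval

exposure_window = next_scan - cve_published
print(f"Earliest detecting scan: {next_scan}")
print(f"Blind spot between disclosure and detection: {exposure_window.days} days")
```

With these illustrative numbers the environment is blind for 26 days, and that is before counting the time the scan itself takes to run on a large estate.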
The net effect of all of this is ineffective planning for mitigations and delayed assessment of impact, resulting in either non-compliance or leaning on the exception process to sail through the attestation. The very final scan, initiated for AOC (Attestation of Compliance) certification, surfacing new vulnerabilities and surprising everyone is not an uncommon scenario. It is one that should be avoidable.
Taking a Fresh Look
- A fresh approach to PCI compliance is needed, one that is continuous in nature and automates the repetitive aspects (namely detection). Automation is often associated with scheduled tasks and jobs; however, automation is only as valuable as the intelligence it uses for detection. Using stale intelligence doesn't yield the desired results and is often a non-productive exercise.
- Organizations need to start using the latest vulnerability intelligence not just for their overall production and corporate vulnerability management programs, but also for compliance. Compliance is also risk driven (even PCI DSS suggests taking a risk-driven approach), and there is no reason to treat compliance functions differently.
- Assets and services have a physical element when it comes to applying patches and mitigations; however, to get past the need to keep up with long-running scans, what's needed is the ability to represent your PCI environment in a form that can be analyzed offline, keeping its relationships with the physical assets intact.
- With containerization and cloud adoption for PCI, knowing your cloud inventory, and tracking it against the latest intelligence and any associated patches made available by cloud providers, becomes critical.
- Getting to know about patches as soon as they are published, then testing and applying them, saves precious cycles and keeps you a step ahead. This applies not just to operating systems or the application stack but also to your cloud images (e.g. Amazon EC2 images); see the sketch after this list.
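As one illustration of tracking cloud images against the latest patched versions, here is a minimal sketch for AWS using boto3. It assumes configured AWS credentials, and the `pci-scope` tag is a hypothetical convention for marking in-scope instances; AWS really does publish the latest Amazon Linux 2 AMI ID as a public SSM parameter:

```python
# Minimal sketch: flag running PCI-scope instances whose AMI lags
# behind the latest patched Amazon Linux 2 image AWS publishes.

import boto3

ssm = boto3.client("ssm")
ec2 = boto3.client("ec2")

# AWS publishes the latest Amazon Linux 2 AMI ID as a public SSM parameter.
latest_ami = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
)["Parameter"]["Value"]

# Hypothetical tagging convention: in-scope instances carry pci-scope=true.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:pci-scope", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

for r in reservations:
    for inst in r["Instances"]:
        if inst["ImageId"] != latest_ami:
            print(f"{inst['InstanceId']} runs stale AMI {inst['ImageId']} "
                  f"(latest patched: {latest_ami})")
```

A check like this runs in seconds against an inventory API rather than hours against live hosts, which is exactly the shift from perpetual scanning that this list argues for.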
All of this will go a long way toward ensuring you don't end up in a mode of perpetual scanning. Scan minimally and, most importantly, use scanning to verify applied mitigations rather than for vulnerability detection and identification.
Learn more by reaching out to us here or over email: info@threatwatch.io