Everyone in the technology industry who processes credit card data is familiar with PCI DSS and the associated process of getting attested on a quarterly basis by a QSA (Qualified Security Assessor). In addition to the quarterly attestation, organizations also have to go through both internal and external audits to maintain PCI compliance on an annual basis.

PCI DSS requires organizations to maintain basic security hygiene, which means having the necessary secure development lifecycle processes, training and awareness programs, and so on. One of the requirements is also to ensure your PCI environment is secured against known vulnerabilities. This is an ongoing effort: it requires organizations to remain vigilant and knowledgeable about vulnerabilities and to run a comprehensive vulnerability management program.

PCI DSS requirements 6.1, 6.2, and 6.3 are the relevant ones in that regard.

Current Approach and Challenges

Most organizations rely on some kind of vulnerability scanning solution for vulnerability detection (in some cases more than a single tool, to give coverage across layers from compute to networks to storage). These scanners initiate scheduled scans on a periodic basis to highlight vulnerabilities that need to be addressed. PCI DSS requires that all vulnerabilities with a CVSS version 2.0 score of 4.0 or above be addressed for PCI compliance.
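To make the threshold concrete, here is a minimal sketch of filtering scan findings against that CVSS v2 cutoff. The finding structure, CVE picks, and scores below are hypothetical examples, not the output format of any particular scanner.

```python
# Illustrative only: flag findings that cross the PCI DSS remediation
# threshold of CVSS v2 >= 4.0. Data below is made up for the example.
PCI_CVSS_THRESHOLD = 4.0

findings = [
    {"cve": "CVE-2018-3646", "cvss_v2": 5.6},   # L1 Terminal Fault
    {"cve": "CVE-2017-0001", "cvss_v2": 2.1},   # below threshold
    {"cve": "CVE-2019-0002", "cvss_v2": 7.5},   # hypothetical high finding
]

# Everything at or above the threshold must be remediated (or formally
# mitigated with compensating controls) before attestation.
must_fix = sorted(
    (f for f in findings if f["cvss_v2"] >= PCI_CVSS_THRESHOLD),
    key=lambda f: f["cvss_v2"],
    reverse=True,
)

for f in must_fix:
    print(f'{f["cve"]}: CVSS v2 {f["cvss_v2"]} must be addressed for PCI')
```

In practice the score would come from your scanner's export or from NVD data rather than a hard-coded list, but the compliance gate itself is just this comparison.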

This continuous scanning approach has outlived its shelf life, and organizations are increasingly looking at it as an exercise done purely for compliance, with little to do with actual risk. Even viewed purely from a compliance angle, it leaves a lot to be desired. Here's why:

  1. NVD (National Vulnerability Database) isn't the greatest source when it comes to getting the latest vulnerability information. You can read more here about why.
  2. To detect vulnerability impact, scanning solutions rely on adding test cases for individual vulnerabilities (CVE identifiers). Prioritization by the vendor and heavy reliance on NVD for the necessary details mean a significant lag in detecting late-breaking vulnerabilities.
  3. The pace at which vulnerabilities get added, the time it takes to add new test cases, and ultimately the cadence of the scan schedule create a situation where at no point in time can a single scan be trusted to paint the latest picture from a compliance standpoint.
  4. Very often vulnerabilities have relevance across vendor solutions, and it's beyond the scope of a single host or network scanner to detect this across the different layers of the stack that touch PCI data. Take, for example, CVE-2018-3646 (the Intel L1 Terminal Fault vulnerability, as we all know it), which prompted responses from all major OS vendors, networking device manufacturers, and virtualization software vendors, as well as cloud providers. Even a slight change of configuration can unmask vulnerabilities that have not surfaced in the past, as newer systems and components come under PCI scope.
  5. Another aspect, especially for larger environments, is that a host/network scan takes a very long time to complete and provides diminishing value, since environments that are scaled up for performance or redundancy are largely homogeneous in their makeup.
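Point 3 above can be sketched as a simple timeline calculation: a CVE is only detectable once the scanner vendor ships a test case for it, and only reported once the next scheduled scan actually runs. Every date and delay in this sketch is hypothetical, chosen just to show how the window of exposure compounds.

```python
from datetime import datetime, timedelta

# Hypothetical timeline: how long a published CVE stays invisible to a
# scheduled scan. All values here are illustrative, not measured data.
cve_published   = datetime(2019, 5, 1)
test_case_added = cve_published + timedelta(days=10)  # vendor ships a check
scan_interval   = timedelta(days=30)                  # monthly scan schedule
last_scan       = datetime(2019, 4, 20)

# Find the first scheduled scan that runs once the test case exists.
next_detecting_scan = last_scan
while next_detecting_scan < test_case_added:
    next_detecting_scan += scan_interval

exposure = next_detecting_scan - cve_published
print(f"Window of exposure: {exposure.days} days")
```

Even with modest assumptions (a ten-day test-case lag, a monthly scan), the vulnerability sits undetected for weeks; with quarterly scans, the same arithmetic stretches the window to months.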

The net effect of all of this is ineffective planning for mitigations and delayed assessment of impact, resulting in either non-compliance or taking the exception process to sail through attestation. The scenario of the very final scan, initiated for AOC (Attestation of Compliance) certification, surfacing new vulnerabilities and surprising everyone is not uncommon. This is something that should be avoidable.

Taking a fresh look

All of this will go a long way toward ensuring you don't end up in a mode of perpetual scanning: scan minimally and, most importantly, use scanning for verification of applied mitigations, not for vulnerability detection and identification.

Learn more by reaching out to us here or over email at info@threatwatch.io.
