Many repositories can include identical code, which in theory should not need to be scanned again. Is there a benefit to caching the results of PHPCS scans for individual files?
Example: using PHPCS to scan its own codebase with the PHPCompatibilityWP standard took 47 seconds (on a VM, Intel i7) for only around 800 PHP files. Reading cached results from disk for each file would have taken far less time.
The cache key (a hash) could combine the following factors:
PHPCS standard(s) used
SHA256 sum of the file contents (whitespace intact, to preserve line numbers)
--severity-level parameter
--config-set key value setting
This way we could reuse the results for each file again and again without re-running PHPCS.
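As a rough illustration, the factors above could be folded into a single SHA-256 digest. This is only a sketch of the idea, not anything PHPCS provides; the function name and parameters are hypothetical:

```python
import hashlib


def phpcs_cache_key(file_contents: bytes, standards: list,
                    severity: str, config_settings: dict) -> str:
    """Hypothetical cache key combining the factors listed above."""
    h = hashlib.sha256()
    # PHPCS standard(s) used; sorted so ordering does not change the key
    for standard in sorted(standards):
        h.update(standard.encode() + b"\0")
    # SHA256 of the raw file contents, whitespace intact
    h.update(hashlib.sha256(file_contents).digest())
    # --severity-level parameter
    h.update(severity.encode() + b"\0")
    # --config-set key/value settings, again in a stable order
    for key in sorted(config_settings):
        h.update(f"{key}={config_settings[key]}".encode() + b"\0")
    return h.hexdigest()
```

Any change to the file contents, standards, severity, or config settings yields a different key, so a lookup table from key to stored scan results would only ever return results produced under identical conditions.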