The tests in scancode-results-analyzer now depend on check_json_scan and run_scan_click from scancode.cli_test_utils to run scans and verify that license detection issues are being caught, classified, and reported properly.
But whenever new license rules are added to the scancode rules to resolve these issues, these tests fail, so the test expectation files require regular updates every time the scancode license rules are updated. The current test pattern looks roughly like the sketch below.
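A minimal sketch of that test pattern, assuming run_scan_click and check_json_scan behave as in scancode-toolkit's own tests; the test data paths and fixture names here are hypothetical placeholders, not the actual scancode-results-analyzer layout:

```python
import os

from scancode.cli_test_utils import run_scan_click, check_json_scan

# Hypothetical test data layout for illustration only.
TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), 'data')


def test_license_detection_issues_are_reported(tmp_path):
    test_dir = os.path.join(TEST_DATA_DIR, 'sample-codebase')       # hypothetical fixture
    expected = os.path.join(TEST_DATA_DIR, 'sample-expected.json')  # hypothetical expectation file
    result_file = str(tmp_path / 'results.json')

    # Run a live license scan through the scancode CLI test runner.
    run_scan_click(['--license', '--json-pp', result_file, test_dir])

    # Compare the scan output against the stored expectation file; this is the
    # comparison that breaks whenever the bundled license rules change.
    check_json_scan(expected, result_file)
```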
If we instead use a custom license index built out of only a handful of rules, and use that in run_scan_click, this problem could be solved.
Create a custom license index built out of a handful of rules, for live scans (see the sketch after this item).
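A minimal sketch of building such an index, assuming the licensedcode.index.LicenseIndex and licensedcode.models.Rule APIs accept in-memory rules as shown (the exact arguments may differ across scancode-toolkit versions); wiring this small index into run_scan_click would additionally require overriding the cached index that the CLI uses:

```python
from licensedcode.index import LicenseIndex
from licensedcode.models import Rule

# A couple of hand-picked rules instead of the full bundled rule set.
rules = [
    Rule(stored_text='Licensed under the MIT license.',
         license_expression='mit'),
    Rule(stored_text='Licensed under the Apache License, Version 2.0.',
         license_expression='apache-2.0'),
]

# Build a small, self-contained index from just these rules.
small_index = LicenseIndex(rules)

# Match a query string against the small index only.
matches = small_index.match(
    query_string='This file is licensed under the MIT license.')
for match in matches:
    print(match.rule.license_expression, match.score())
```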
Also, add files from the timescale repository (after renaming timescale to some other name that will never be a license name) as tests for the summary and get_unique_license functions. https://github.com/nexB/scancode-toolkit/issues/2268
Create scripts to regenerate the test expectations (see the sketch below).
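A minimal sketch of such a regeneration script, reusing the same hypothetical test data layout as above and assuming check_json_scan supports a regen flag that rewrites the expectation file from fresh scan results:

```python
import os

from scancode.cli_test_utils import run_scan_click, check_json_scan

# Hypothetical test data layout and fixture-to-expectation mapping.
TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), 'data')
FIXTURES = {
    'sample-codebase': 'sample-expected.json',
}


def regen_expectations():
    for fixture, expected in FIXTURES.items():
        test_dir = os.path.join(TEST_DATA_DIR, fixture)
        expected_file = os.path.join(TEST_DATA_DIR, expected)
        result_file = os.path.join(TEST_DATA_DIR, fixture + '-results.json')

        # Re-run the live scan for this fixture.
        run_scan_click(['--license', '--json-pp', result_file, test_dir])
        # regen=True overwrites the stored expectation with the fresh results.
        check_json_scan(expected_file, result_file, regen=True)


if __name__ == '__main__':
    regen_expectations()
```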
Add README files for the tests documenting how the test files were obtained.