So the background context of this ticket is that I am trying to integrate https://github.com/albuch/sbt-dependency-check into Pekko, see https://github.com/apache/incubator-pekko/pull/289. Originally the integration was meant to be a simple one that just involved adjusting a couple of settings (e.g. the output directory) and manually generating the report with `dependencyCheckAggregate`, however it was suggested in the PR that rather than having to generate the report manually we could have it run as part of our CI pipeline.
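For reference, the "simple" integration amounts to little more than the following build tweak. This is only a sketch: the setting keys and their types (`dependencyCheckOutputDirectory` as an `Option[File]`, `dependencyCheckFormats` as a `Seq[String]`) are taken from my reading of the plugin's README rather than from the Pekko PR, and the output path is just an example.

```scala
// project/plugins.sbt (version elided on purpose, use whatever is current):
// addSbtPlugin("net.vonbuchholtz" % "sbt-dependency-check" % "<latest>")

// build.sbt -- sketch only, not the exact Pekko configuration
lazy val root = (project in file("."))
  .settings(
    name := "example-build",
    // write the reports somewhere predictable instead of the plugin's default location
    dependencyCheckOutputDirectory := Some(target.value / "dependency-check"),
    // generate both a human-readable and a machine-readable report
    dependencyCheckFormats := Seq("HTML", "JSON")
  )
```

With something like that in place the report is produced manually by running `sbt dependencyCheckAggregate`.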
More specifically, this would mean running `dependencyCheckAggregate` whenever our docs are generated using source generators (i.e. https://developer.lightbend.com/docs/paradox/current/customization/generators.html#generating-pages-with-code) and then integrating the generated report into the docs.
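I haven't written out the exact Paradox source generator wiring yet, but the rough shape I have in mind is an sbt task along these lines. Purely a sketch: the project name, report filename and target paths below are placeholder assumptions, not the actual Pekko layout.

```scala
// build.sbt -- sketch of "run the scan, then pull the report into the docs build"
lazy val copyDependencyCheckReport = taskKey[Unit](
  "Runs dependencyCheckAggregate and copies the report next to the docs sources"
)

lazy val docs = (project in file("docs"))
  .settings(
    copyDependencyCheckReport := {
      // referencing .value here makes dependencyCheckAggregate run before this task body
      (LocalRootProject / dependencyCheckAggregate).value
      // placeholder locations -- adjust to wherever the report actually ends up
      val report = (LocalRootProject / target).value / "dependency-check" / "dependency-check-report.html"
      val out    = target.value / "docs-includes" / "dependency-check-report.html"
      IO.copyFile(report, out)
    }
  )
```

The docs side would then just treat the copied report as another generated page or static resource.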
The obvious core problem here is that downloading/generating the NVD database takes a huge amount of time, so my first general question is whether doing this makes sense at all, and as a follow-up, how would you recommend caching the database so that it doesn't have to be downloaded every single time? I noticed that the database seems to be stored in the standard coursier local repository, which if I understand correctly means it should be handled fine by the standard GitHub Actions cache action for sbt projects, i.e. https://github.com/coursier/cache-action.