bitkeks opened 4 months ago
We have reviewed and tested the available official containers for the proposed tools.
ProjectDiscovery (Naabu, httpx, nuclei) => DockerHub site. Official containers are updated on each major release of the tools, ensuring that they are run in their latest versions. They are fully functional and can be integrated into the proposed pipeline.
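For illustration, a minimal sketch of how the three containers could be chained in a pipeline step, feeding naabu's open ports into httpx and the resulting live URLs into nuclei. Image tags, flags, and the target are assumptions based on the official images; adjust them to the pipeline's actual inputs.

```python
"""Sketch: chain the official ProjectDiscovery containers.
naabu finds open ports, httpx probes them for live HTTP services,
and nuclei scans the resulting URLs. The target is a placeholder."""
import subprocess

TARGET = "scanme.example.org"  # placeholder target

def run(image: str, args: list[str], stdin: bytes = b"") -> bytes:
    """Run a tool from its official container and return its stdout."""
    result = subprocess.run(
        ["docker", "run", "--rm", "-i", image, *args],
        input=stdin, capture_output=True, check=True,
    )
    return result.stdout

ports = run("projectdiscovery/naabu:latest", ["-host", TARGET, "-silent"])
urls = run("projectdiscovery/httpx:latest", ["-silent"], stdin=ports)
findings = run("projectdiscovery/nuclei:latest", ["-silent"], stdin=urls)
print(findings.decode())
```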
OWASP ZAP => ZAP Docker. There are four active official containers: stable, weekly, nightly and bare. We found that the most reliable ones are weekly, as it provides core and add-on updates on a weekly basis (avoiding the problems of everyday builds), and bare, as it is a minimal version of the stable one, ideal for CI environments, but lacking the testing scripts.
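To make the difference concrete: the packaged scan scripts ship with the weekly image but not with bare. A hedged sketch of a single baseline scan, assuming the image names currently published by the ZAP project and a placeholder target:

```python
"""Sketch: run ZAP's packaged baseline scan from the weekly container.
The same invocation would fail on the bare image, which does not ship
zap-baseline.py. Image name and target URL are placeholders."""
import subprocess

subprocess.run(
    ["docker", "run", "--rm", "-t", "ghcr.io/zaproxy/zaproxy:weekly",
     "zap-baseline.py", "-t", "https://target.example.org"],
    check=True,  # treats any warning/failure exit code as a pipeline failure
)
```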
Greenbone Community Edition => Containers. Greenbone Community Edition does not run in a single container; instead, it is a distributed service architecture, with each service running in a dedicated container. This could make it difficult to integrate the tool into the pipeline due to its complexity and size, especially regarding its feed updates, which are published on a daily basis. At this point, we are investigating three possible solutions:
Additionally, we suggest expanding the proposed methodology and defining two pipelines, based on the tests done with the tools:
Regarding Greenbone / OpenVAS:
> At this point, we are investigating three possible solutions:
Please follow solution 1, deploy it as-is from Greenbone.
Since we use full VMs, creating the GVM setup from Docker Compose should work. Then only the persistence of feed data remains to be solved.
Please also evaluate reusing VMs in this pipeline; maybe we can even skip creating the VM from scratch.
Following the above-mentioned solution and digging deeper into the available Greenbone documentation on the container workflow, we have found that feeds are provided as their own container images, which are updated every 24 hours (as can be seen in the Docker Hub logs).
With this in mind, when the pentesting pipeline job is triggered and the Greenbone container architecture is deployed via its docker compose file, the latest image versions are pulled, hence including the feed updates for that day.
However, it still takes some time for the deployed service daemons to load the feeds into memory. In our tests in a controlled environment, this time ranged from 15 to 30 minutes, which we think is more than acceptable and should let us get rid of maintaining persistent volumes or using extra pipelines to update the feeds.
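To make that wait explicit in the pipeline job, a sketch of a feed-readiness gate: pull and start the stack, then poll gvmd through python-gvm until no feed reports itself as still syncing. The compose file path, socket path, credentials, and timings are assumptions for a default Community Edition compose deployment.

```python
"""Sketch: deploy the Greenbone compose stack with fresh images and wait
until the feeds are loaded, instead of keeping persistent volumes."""
import subprocess
import time

from gvm.connections import UnixSocketConnection
from gvm.errors import GvmError
from gvm.protocols.gmp import Gmp

COMPOSE_FILE = "docker-compose.yml"  # assumed: Greenbone CE compose file
SOCKET = "/tmp/gvm/gvmd/gvmd.sock"   # assumed: gvmd socket exposed by the stack

# Pull the latest images (including the daily feed images) and start the stack.
subprocess.run(["docker", "compose", "-f", COMPOSE_FILE, "pull"], check=True)
subprocess.run(["docker", "compose", "-f", COMPOSE_FILE, "up", "-d"], check=True)

# Poll until no feed is still syncing, bounded by the 30-minute upper end
# observed in our tests. A more robust check would parse the XML response.
deadline = time.monotonic() + 30 * 60
while time.monotonic() < deadline:
    try:
        with Gmp(UnixSocketConnection(path=SOCKET)) as gmp:
            gmp.authenticate("admin", "admin")  # placeholder credentials
            if "currently_syncing" not in gmp.get_feeds():
                break  # all feeds loaded; the scanner is ready
    except (OSError, GvmError):
        pass  # gvmd not reachable yet
    time.sleep(60)
```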
The containers with feed updates are the following:
Great work, thanks!
This looks acceptable to me as well. Let's continue with this approach and build the pipeline without persistent container storage.
We'll see how it works out in SCS Zuul when deployed. The internal SCS registry can cache the feed containers as well.
Tests are being performed with ZAP, as the containerized scans (both the proposed Baseline and Full) only allow a scan against a single host/target. We are investigating whether creating a specific scanning plan through the ZAP Automation Framework could handle this, or whether we should launch a dedicated containerized scan for each defined target.
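A hedged sketch of the first option: a single Automation Framework plan whose context covers several targets, run once in the weekly container via its autorun mode. The job list, image name, and target URLs are assumptions; the plan file is written to the working directory, which is mounted into the container.

```python
"""Sketch: one ZAP Automation Framework plan for multiple targets.
The plan spiders all context URLs, waits for the passive scan, and
writes a JSON report; extend the job list (e.g. activeScan) as needed."""
import pathlib
import subprocess

PLAN = """\
env:
  contexts:
    - name: pipeline-targets
      urls:
        - https://a.example.org
        - https://b.example.org
jobs:
  - type: spider
  - type: passiveScan-wait
  - type: report
    parameters:
      template: traditional-json
      reportDir: /zap/wrk
"""

workdir = pathlib.Path.cwd()
(workdir / "plan.yaml").write_text(PLAN)
subprocess.run(
    ["docker", "run", "--rm", "-t",
     "-v", f"{workdir}:/zap/wrk:rw",
     "ghcr.io/zaproxy/zaproxy:weekly",
     "zap.sh", "-cmd", "-autorun", "/zap/wrk/plan.yaml"],
    check=True,
)
```

The fallback option would simply loop a docker invocation (such as the zap-baseline.py one shown earlier) over the target list, at the cost of one container start per target.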
Regarding Greenbone, we have observed that the best approach for our needs is to interact with the gvmd daemon using the python-gvm library, which allows us to interact with the ospd-openvas scanner leveraging the Greenbone Management Protocol (using gvm-tools could be an alternative too).
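A minimal sketch of driving a scan through gvmd this way. The socket path and credentials are placeholders; the UUIDs are the well-known defaults shipped with gvmd ("Full and fast" config, "All IANA assigned TCP" port list, the OpenVAS scanner) and should be verified on the running instance with get_scan_configs / get_port_lists / get_scanners.

```python
"""Sketch: create a target and task via GMP and start the scan."""
from gvm.connections import UnixSocketConnection
from gvm.protocols.gmp import Gmp
from gvm.transforms import EtreeTransform

FULL_AND_FAST = "daba56c8-73ec-11df-a475-002264764cea"    # default scan config
ALL_IANA_TCP = "33d0cd82-57c6-11e1-8ed1-406186ea4fc5"     # default port list
OPENVAS_SCANNER = "08b69003-5fc2-4037-a479-93b440211c73"  # default scanner

connection = UnixSocketConnection(path="/tmp/gvm/gvmd/gvmd.sock")  # assumed path
with Gmp(connection, transform=EtreeTransform()) as gmp:
    gmp.authenticate("admin", "admin")  # placeholder credentials
    target = gmp.create_target(
        name="pipeline target",
        hosts=["192.0.2.10"],  # placeholder host
        port_list_id=ALL_IANA_TCP,
    )
    task = gmp.create_task(
        name="pipeline scan",
        config_id=FULL_AND_FAST,
        target_id=target.get("id"),
        scanner_id=OPENVAS_SCANNER,
    )
    gmp.start_task(task.get("id"))
```

Results can later be fetched with get_results, or exported as a report via get_report.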
> We are investigating whether creating a specific scanning plan through the ZAP Automation Framework could handle this, or whether we should launch a dedicated containerized scan for each defined target.
lgtm!
> is to interact with the gvmd daemon using the python-gvm library
lgtm!
A DefectDojo instance has been configured in order to define and create "products", "engagements" and "tests", as well as API integrations, as follows (subject to changes):
The pipeline in our test environment has been modified to export scan results to this DefectDojo instance. This will be committed to the security-infra-scan-pipeline repository as soon as we confirm that everything works as expected, removing scan results from the pipeline output.
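For reference, a sketch of that export step, assuming DefectDojo's v2 import-scan endpoint; the instance URL, API token, engagement id, and report file are placeholders, and scan_type must match one of the parsers DefectDojo lists (e.g. "ZAP Scan").

```python
"""Sketch: push a scan report into DefectDojo via /api/v2/import-scan/."""
import requests

DOJO_URL = "https://defectdojo.example.org"  # placeholder instance URL
API_TOKEN = "changeme"                       # placeholder API token

with open("zap-report.json", "rb") as report:
    resp = requests.post(
        f"{DOJO_URL}/api/v2/import-scan/",
        headers={"Authorization": f"Token {API_TOKEN}"},
        data={
            "scan_type": "ZAP Scan",  # parser name as listed by DefectDojo
            "engagement": 1,          # placeholder engagement id
            "active": True,
            "verified": False,
        },
        files={"file": report},
    )
resp.raise_for_status()
print(resp.json().get("test"))  # id of the test created for this import
```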
On another topic, DefectDojo user authentication can be performed with Keycloak (as an OAuth2 provider) using the social-auth plugin. For this to work, a client of type openid-connect has to be created in the Keycloak realm. It could be worthwhile to use the available SCS authentication system to let users log in to the instance.
Great! Is the test instance publicly accessible? Maybe we can use GitHub as a login source?
> Great! Is the test instance publicly accessible?
Yes, it is hosted in our assigned project space. However, for now and for testing purposes, it only has administrative access. Our final goal is to add both Terraform and Ansible scripts to the repository to allow everyone to deploy the same architecture.
> Maybe we can use GitHub as a login source?
The documentation only mentions GitHub Enterprise, but we are investigating, in parallel with other tasks, whether it is possible to use GitHub as an OAuth2 mechanism.
David sent me some credentials for testing. We'll skip GitHub auth; it's not needed for the demo setup at the moment.
As an operator, I want to run the security pipeline tools in containers so that they can easily be deployed in a CI/CD pipeline like Zuul.
Belongs to epic #525.
Definition of Ready:
Definition of Done: