SovereignCloudStack / issues

This repository is used for issues that are cross-repository or not bound to a specific repository.
https://github.com/orgs/SovereignCloudStack/projects/6

Containerize the security pipeline tools #526

Open bitkeks opened 4 months ago

bitkeks commented 4 months ago

As an operator, I want to run the security pipeline tools in containers so that they can easily be deployed in a CI/CD pipeline like Zuul.

Belongs to epic #525.

Definition of Ready:

Definition of Done:

90n20 commented 4 months ago

We have reviewed and tested the available official containers for the proposed tools.

Additionally, we suggest expanding the proposed methodology and defining two pipelines based on the tests done with the tools:

bitkeks commented 4 months ago

Regarding Greenbone / OpenVAS:

At this point, we are investigating three possible solutions:

Please follow solution 1 and deploy it as-is from Greenbone.

Since we use full VMs, creating the GVM setup from Docker Compose should work. Then only the persistence of feed data remains to be solved.

Please also evaluate re-using VMs in this pipeline; maybe we can even skip creating the VM from scratch.

90n20 commented 4 months ago

Following the above-mentioned solution and digging deeper into the available Greenbone documentation on the container workflow, we found that the feeds are provided in their own container images, which are updated every 24 hours (as can be seen in the Docker Hub logs).

With this in mind, when the pentesting pipeline job is triggered and the Greenbone container architecture is deployed via its Docker Compose file, the latest image versions are pulled, which includes the feed updates for that day.

However, it still takes some time for the deployed service daemons to load the feeds into memory. In our tests in a controlled environment, this ranged from 15 to 30 minutes, which we think is more than acceptable and should let us avoid maintaining persistent volumes or running extra pipelines to update the feeds.
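For illustration, a minimal sketch of how a pipeline job could wait for the feeds to finish loading before starting any scan, assuming python-gvm's `get_feeds()` call (GMP 20.08+), the gvmd Unix socket exposed by the community containers, and placeholder credentials:

```python
# Sketch: block until gvmd reports that no feed is still syncing.
# Socket path and credentials are placeholders; adjust to the compose setup.
import time

from gvm.connections import UnixSocketConnection
from gvm.protocols.gmp import Gmp
from gvm.transforms import EtreeTransform

SOCKET_PATH = "/run/gvmd/gvmd.sock"  # assumed mount point of the gvmd socket volume


def feeds_ready(gmp) -> bool:
    """True when no feed element reports a currently_syncing child."""
    feeds = gmp.get_feeds()
    return not feeds.findall(".//currently_syncing")


with Gmp(UnixSocketConnection(path=SOCKET_PATH), transform=EtreeTransform()) as gmp:
    gmp.authenticate("admin", "admin")  # placeholder credentials
    while not feeds_ready(gmp):
        time.sleep(60)  # feeds took 15-30 minutes to load in our tests
    print("All feeds loaded, scans can start.")
```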

The containers with feed updates are the following:

bitkeks commented 4 months ago

Great work, thanks!

This looks acceptable to me as well. Let's continue with this approach and build the pipeline without persistent container storage.

We'll see how it works out in SCS Zuul when deployed. The internal SCS registry can cache the feed containers as well.

90n20 commented 3 months ago

Tests are being performed with ZAP, as the containerized scans (both the proposed Baseline and Full) only allow scanning a single host/target. We are investigating whether creating a specific scanning plan through the ZAP Automation Framework could handle this, or whether we should launch a dedicated containerized scan for each defined target.
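For the second option, a rough sketch of what launching one dedicated containerized Baseline scan per target could look like; the image tag, target list, and report names are illustrative only:

```python
# Sketch: one ZAP Baseline scan container per target, reports written to the
# mounted working directory. Targets and file names are placeholders.
import os
import subprocess

TARGETS = [
    "https://example.scs.community",            # placeholder targets
    "https://dashboard.example.scs.community",
]

for i, target in enumerate(TARGETS):
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{os.getcwd()}:/zap/wrk:rw",   # ZAP writes reports to /zap/wrk
            "ghcr.io/zaproxy/zaproxy:stable",
            "zap-baseline.py",
            "-t", target,                          # single target per scan
            "-r", f"zap-report-{i}.html",
        ],
        check=False,  # ZAP exits non-zero when alerts are found
    )
```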

Regarding Greenbone, we have observed that the best approach for our needs is to interact with the gvmd daemon using the python-gvm library, which allows interacting with the ospd-openvas scanner via the Greenbone Management Protocol (using gvm-tools could be an alternative, too).
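As a first sketch of that interaction (the socket path, credentials, and target host are placeholders; the scan-config and scanner UUIDs are the defaults of a stock gvmd installation):

```python
# Sketch: create a target and task via GMP and start the scan.
from gvm.connections import UnixSocketConnection
from gvm.protocols.gmp import Gmp
from gvm.transforms import EtreeTransform

FULL_AND_FAST = "daba56c8-73ec-11df-a475-002264764cea"    # default "Full and fast" scan config
OPENVAS_SCANNER = "08b69003-5fc2-4037-a479-93b440211c73"  # default OpenVAS scanner in gvmd

with Gmp(UnixSocketConnection(path="/run/gvmd/gvmd.sock"), transform=EtreeTransform()) as gmp:
    gmp.authenticate("admin", "admin")  # placeholder credentials

    target = gmp.create_target(
        name="pipeline-target",
        hosts=["192.0.2.10"],    # placeholder host
        port_range="T:1-65535",
    )
    task = gmp.create_task(
        name="pipeline-scan",
        config_id=FULL_AND_FAST,
        target_id=target.get("id"),
        scanner_id=OPENVAS_SCANNER,
    )
    response = gmp.start_task(task.get("id"))
    print("start_task:", response.get("status"), response.get("status_text"))
```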

bitkeks commented 3 months ago

We are investigating whether creating a specific scanning plan through the ZAP Automation Framework could handle this, or whether we should launch a dedicated containerized scan for each defined target.

lgtm!

is to interact with the gvmd daemon using the python-gvm library

lgtm!

90n20 commented 3 months ago

The DefectDojo instance has been configured to define and create "products", "engagements" and "tests", as well as the API integrations, as follows (subject to change):

The pipeline in our test environment has been modified to export scan results to this DefectDojo instance. This will be committed to the security-infra-scan-pipeline repository as soon as we confirm that everything works as expected, removing scan results from the pipeline output.
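As an illustration of the export step, a sketch of pushing a ZAP report through DefectDojo's v2 REST API; the instance URL, token, and engagement id are placeholders for our test setup:

```python
# Sketch: import a ZAP XML report into an existing DefectDojo engagement.
import requests

DOJO_URL = "https://defectdojo.example.scs.community"  # placeholder instance URL
API_TOKEN = "..."                                      # API v2 token of a service account
ENGAGEMENT_ID = 1                                      # engagement created for this pipeline run

with open("zap-report.xml", "rb") as report:
    resp = requests.post(
        f"{DOJO_URL}/api/v2/import-scan/",
        headers={"Authorization": f"Token {API_TOKEN}"},
        data={
            "engagement": ENGAGEMENT_ID,
            "scan_type": "ZAP Scan",   # DefectDojo parser name for ZAP reports
            "active": "true",
            "verified": "false",
        },
        files={"file": report},
        timeout=120,
    )
resp.raise_for_status()
print("Imported into test id:", resp.json().get("test"))
```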

(screenshots of the DefectDojo configuration attached)

On another topic, DefectDojo user authentication can be performed with Keycloak (as an OAuth2 provider) using the social-auth plugin. For this to work, a client of type openid-connect has to be created in the Keycloak realm. It could be worthwhile to use the available SCS authentication system to allow users to log in to the instance.
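As a rough sketch of that client creation through the Keycloak Admin REST API (base URL, realm, credentials, and the redirect URI are placeholders; older Keycloak versions additionally prefix the paths with /auth):

```python
# Sketch: create an openid-connect client for DefectDojo in a Keycloak realm.
import requests

KEYCLOAK = "https://keycloak.example.scs.community"  # placeholder Keycloak base URL
REALM = "scs"                                        # placeholder realm name

# Obtain an admin token via the admin-cli password grant.
token = requests.post(
    f"{KEYCLOAK}/realms/master/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "admin-cli",
        "username": "admin",
        "password": "admin-password",  # placeholder
    },
    timeout=30,
).json()["access_token"]

# Client representation used by DefectDojo's social-auth (OIDC) login.
client = {
    "clientId": "defectdojo",
    "protocol": "openid-connect",
    "publicClient": False,
    "standardFlowEnabled": True,
    "redirectUris": ["https://defectdojo.example.scs.community/complete/keycloak/*"],  # placeholder callback
}

resp = requests.post(
    f"{KEYCLOAK}/admin/realms/{REALM}/clients",
    headers={"Authorization": f"Bearer {token}"},
    json=client,
    timeout=30,
)
resp.raise_for_status()
```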

bitkeks commented 3 months ago

Great! Is the test instance publicly accessible? Maybe we can use GitHub as a login source?

90n20 commented 2 months ago

Great! Is the test instance publicly accessible?

Yes, it is hosted in our assigned project space. However, for now and for testing purposes, it only has administrative access. Our final goal is to add both Terraform and Ansible scripts to the repository so that everyone can deploy the same architecture.

Maybe we can use GitHub as a login source?

The documentation only mentions GitHub Enterprise, but we are investigating, in parallel with other tasks, whether it is possible to use GitHub as an OAuth2 mechanism.

bitkeks commented 2 months ago

David sent me some credentials for testing. We'll skip GitHub auth; it is not needed for the demo setup at the moment.