fmigneault opened this issue 1 year ago
Oh dang, how did you trigger this security scan? Can we scan a few more? Like the Thredds image and our Jupyter image?
However, this issue should have been opened on the Kartoza side, since the custom image I build is simply a cached copy of the Kartoza image with a few minor fixes; here's the Dockerfile: https://github.com/tlvu/docker-geoserver/blob/2e5dbb99effa75abe818cdf551532f2b1de7739c/Dockerfile.custom
> Therefore, the following plugin would not be required and could be removed: https://github.com/kartoza/docker-geoserver/blob/a433c2d16729a52dbd82ebfd52db67ac93a6579b/build_data/stable_plugins.txt#L16
Oh, they simply bundle all the known plugins in the docker image. It does not mean the plugin is enabled.
I have a feeling all these security issues are due to the various plugins, not Geoserver itself.
I'm monitoring all these references (using https://app.snyk.io/, same thing that runs on DockerHub):
Projects are duplicated to ensure Docker-based and GitHub-based scans have higher chances to catch vulnerabilities.
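For reference, the same kind of scan can be reproduced locally with the Snyk CLI (a sketch; the image tag is one from this thread, and `snyk auth` must have been run beforehand):

```shell
# One-off vulnerability scan of a pulled image
docker pull pavics/geoserver:2.23.0-kartoza-build20230405
snyk container test pavics/geoserver:2.23.0-kartoza-build20230405 \
  --severity-threshold=high

# Register the image for continuous monitoring in the Snyk dashboard,
# as described above
snyk container monitor pavics/geoserver:2.23.0-kartoza-build20230405
```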
We can definitely add more. I would consider creating a birdhouse-deploy specific org to add all images from the stack. I use my own fmigneault org to focus mainly on items I work on.
> Oh, they simply bundle all the known plugins in the docker image. It does not mean the plugin is enabled. I have a feeling all these security issues are due to the various plugins, not Geoserver itself.
Indeed. We could be more selective.
An update of the base tomcat image came out that could fix some issues. If this was pushed in a kartoza/geoserver:2.23.x image, we should rebuild with the latest updates: https://github.com/kartoza/docker-geoserver/blob/a433c2d16729a52dbd82ebfd52db67ac93a6579b/Dockerfile#L6-L8
If we use the procedure defined in https://github.com/kartoza/docker-geoserver#building-with-a-specific-version-of--tomcat, we don't even need to wait for them to update the base image to obtain the fix. However, using the 2.23.x changes instead of 2.22.2 could still be relevant.
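That procedure boils down to passing the tomcat base tag as a build argument. A minimal sketch, assuming the build arg is named `IMAGE_VERSION` as in the Dockerfile lines linked above (the tomcat tag and output image tag here are illustrative):

```shell
git clone https://github.com/kartoza/docker-geoserver.git
cd docker-geoserver

# Override the tomcat base image without waiting for an upstream release
docker build \
  --build-arg IMAGE_VERSION=9.0.74-jdk11-temurin-focal \
  -t pavics/geoserver:2.23.0-tomcat9.0.74 \
  .
```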
I found that there still is this Dockerfile in birdhouse-deploy: https://github.com/bird-house/birdhouse-deploy/blob/master/birdhouse/config/geoserver/Dockerfile
I think it is deprecated and should be removed.
What is the correct approach to rebuild the current GEOSERVER_IMAGE in https://github.com/bird-house/birdhouse-deploy/blob/1981a1dc4397a87c32c87d9a723b3e8b88dda254/birdhouse/config/geoserver/default.env#L11-L14?
Do we have some documentation/procedure to update it? I think a README in https://github.com/bird-house/birdhouse-deploy/tree/master/birdhouse/config/geoserver with references to relevant https://github.com/kartoza/docker-geoserver, https://hub.docker.com/r/pavics/geoserver and build procedure would be very helpful.
> What is the correct approach to rebuild the current GEOSERVER_IMAGE

We do not rebuild the Kartoza image; we just cache it. It was rebuilt this time only because the base image was missing 2 plugins we need and did not have the PR I sent to them to allow the context-root change. All of these are supposed to be fixed in the newer 2.23.0 image, so we can simply use the 2.23.0 image directly.
We already cached 2.23.0, so if you want to try, you can simply set GEOSERVER_TAGGED=2.23.0-kartoza-build20230405.
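For example, assuming birdhouse-deploy's usual env.local override mechanism (the variable name comes from the default.env linked above):

```shell
# env.local — pin the cached 2.23.0 build instead of the default tag
export GEOSERVER_TAGGED="2.23.0-kartoza-build20230405"
```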
> I found that there still is this Dockerfile in birdhouse-deploy: https://github.com/bird-house/birdhouse-deploy/blob/master/birdhouse/config/geoserver/Dockerfile I think it is deprecated and should be removed.

Exactly. That was from before 2.19, so that Dockerfile and all related files can now be deleted.
Are you able to trigger a scan on pavics/geoserver:2.23.0-kartoza-build20230405 to see if the new tomcat base is there? Otherwise we'll upgrade for nothing.
> We already cached 2.23.0 so if you want to try, you can simply set GEOSERVER_TAGGED=2.23.0-kartoza-build20230405.
If it's working on Ouranos' side, I propose we directly update the default.env with it. You are most probably testing it more in depth than we do.
I'll add it to the scan, and report back anything that comes up.
I've also started the scan directly in https://hub.docker.com/layers/pavics/geoserver/2.23.0-kartoza-build20230405/images/sha256-98eee4fc9c46fca45c9f56961f94366dacb65f65fa11d377ce337ba1a31683a1?context=explore, since DockerHub offers the same Snyk analysis out of the box.
I think this is the result of your scan, so is the base image updated?
@tlvu Yes, that's the result of the scan. It seems some updates were applied, but some critical items are still there.
Notably, the log4j/log4j 1.2.17 vulnerability is back in there.
Inside the built container, TOMCAT_VERSION=9.0.73 is defined. That matches the latest from the kartoza repo, but 9.0.74 is available, and that version is marked as the resolution of a few security items flagged in the scan.
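For anyone wanting to reproduce the check, one way to read the tomcat version baked into the image (a sketch; TOMCAT_VERSION is the environment variable reported above, set by the tomcat base image):

```shell
# Run a throwaway shell in the image and print the tomcat version
docker run --rm --entrypoint /bin/sh \
  pavics/geoserver:2.23.0-kartoza-build20230405 \
  -c 'echo "$TOMCAT_VERSION"'
```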
Description
I ran a security vulnerabilities scan against the new pavics/geoserver:2.22.2-kartoza-build20230226-r9-allow-change-context-root-and-fix-missing-stable-plugins-and-avoid-chown-datadir. The following vulnerabilities were detected:
:red_square: Critical:
:orange_square: High:
:yellow_square: Medium and below: ignored or duplicate of others above
For most of them, updating to the latest packages (minor or patch revision) seems sufficient to fix the issues.
For the H2 database related issues, the fix requires a major version change (1.1.119 -> 2.1.210). Maybe this can work out of the box. Otherwise, H2 seems to be used for Disk Quota, which can be switched to PostgreSQL, which we already use for GeoServer: https://github.com/kartoza/docker-geoserver#enable-disk-quota-storage-in-postgresql-backend
Therefore, the following plugin would not be required and could be removed: https://github.com/kartoza/docker-geoserver/blob/a433c2d16729a52dbd82ebfd52db67ac93a6579b/build_data/stable_plugins.txt#L16
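For illustration, switching the Disk Quota store to PostgreSQL is driven by container environment variables; the names below are assumptions modeled on the kartoza README section linked above and must be double-checked there before use:

```shell
# geoserver.env — hypothetical settings to move Disk Quota from H2 to
# PostgreSQL; confirm exact variable names in the kartoza README.
DB_BACKEND=POSTGRES
HOST=postgis
POSTGRES_PORT=5432
POSTGRES_DB=gis
POSTGRES_USER=docker
POSTGRES_PASS=docker
```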
References
Concerned Organizations
All