Closed alacuku closed 1 year ago
I think we should always use the init-container approach in the helm charts and deprecate the usage of the `falcosecurity/falco` image. It would also let us simplify the init-container logic as described in: #418.
cc @leogr
> In the third case, we set `driver.loader.enabled` to `true` but disable the `initContainer`:
>
> ```shell
> helm install falco \
>     --set image.repository=falcosecurity/falco \
>     --set driver.loader.enabled=true \
>     --set driver.loader.initContainer.enabled=false \
>     falcosecurity/falco
> ```
>
> We are removing the initContainer, but the logic of the `falco-driver-loader` script inside the Falco container should be triggered. We still get the error:
>
> ```
> Warning  Failed  4s  kubelet  Error: failed to generate container "f41374f044c4a77e9a052ce42445fe1b0a962e3fbde6ae25e32b49e970734b04" spec: failed to generate spec: failed to mkdir "/sys/module/falco": mkdir /sys/module/falco: operation not permitted
> ```
>
> In order to resolve the mount issue without an init container we should mount the whole `/sys/module` folder inside the Falco container. And that is not a best practice since it is mounted in ReadWrite mode.
I totally agree it's not a best practice, but not supporting it could break legacy use cases. I'm not completely contrary to getting rid of the third case, but - before removing it - we should make sure nobody needs it anymore.
N.B. I know it does not work with the latest chart version, but it worked with the 2.0.0 AFAIK.
cc @falcosecurity/core-maintainers wdyt?
I agree with @alacuku, using just one approach (the init container one) would be easier for everyone: less for us to maintain, and less complexity for users. Moreover, the `falcosecurity/falco` image is currently not working due to the mount issue described above, so probably nobody is using it or needs it :thinking: It could be the right moment to remove it.
If we remove the `falcosecurity/falco` image we could also remove this block, since drivers always need the `falco-driver-loader` to be installed: https://github.com/falcosecurity/charts/blob/06225b85abae5beb77e1a11cf0d393a8b1bb8ce8/falco/templates/pod-template.tpl#L133-L146
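For reference, if the init-container approach became the only supported path, the user-facing configuration could reduce to something like the following values fragment (a sketch only; the key names assume the chart layout discussed in this issue, with `falcosecurity/falco-no-driver` as the sole image flavour):

```yaml
# Sketch of values.yaml under a hypothetical init-container-only chart.
image:
  repository: falcosecurity/falco-no-driver  # only supported flavour
driver:
  enabled: true
  loader:
    enabled: true
    initContainer:
      enabled: true  # always on; the in-container loader path goes away
```

This would remove the need for the conditional template block linked above, since the driver would always be prepared by the init container before the Falco container starts.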
**Describe the bug**

The falcosecurity organization provides two different image flavours for Falco: `falcosecurity/falco` and `falcosecurity/falco-no-driver`. Starting from version 2.0.0 of the charts, `falcosecurity/falco-no-driver` is the default image. As far as I know, we are still supporting the `falcosecurity/falco` image in our charts. With the latest charts, v2.2.0, I get errors when using the `falcosecurity/falco` image.

In the first case, we use the default values of the chart and just set the `falcosecurity/falco` image. We have both the variables `driver.loader.enabled` and `driver.loader.initContainer.enabled` set to `true`. The init container correctly downloads/builds the kernel module for our system, but the `falco-driver-loader` script also runs inside the Falco container and fails. That happens because we are not mounting the `/etc/` folder in the Falco container, which causes the script to fail. And it's ok, because we are using the init container: https://github.com/falcosecurity/charts/blob/06225b85abae5beb77e1a11cf0d393a8b1bb8ce8/falco/templates/pod-template.tpl#L133-L146

In the second case, we set both `driver.loader.enabled` and `driver.loader.initContainer.enabled` to `false`. The Falco pods do not start; the error is a failure to set up the `/sys/module/falco` volume mount.
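Flag-wise, the second case corresponds to an install like this (a sketch, mirroring the command shape and flag names used for the third case elsewhere in this issue):

```shell
helm install falco \
    --set image.repository=falcosecurity/falco \
    --set driver.loader.enabled=false \
    --set driver.loader.initContainer.enabled=false \
    falcosecurity/falco
```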
In the latest version of the chart we add a new volume mount, `/sys/module/falco`. For more info on why it is needed, see here: https://github.com/falcosecurity/falco/pull/2214. The `/sys/module/falco` folder is available only after the kernel module has been loaded.

In the third case, we set `driver.loader.enabled` to `true` but disable the `initContainer`. We are removing the initContainer, but the logic of the `falco-driver-loader` script inside the Falco container should be triggered. We still get the error:

```
Warning  Failed  4s  kubelet  Error: failed to generate container "f41374f044c4a77e9a052ce42445fe1b0a962e3fbde6ae25e32b49e970734b04" spec: failed to generate spec: failed to mkdir "/sys/module/falco": mkdir /sys/module/falco: operation not permitted
```

In order to resolve the mount issue without an init container, we would have to mount the whole `/sys/module` folder inside the Falco container. That is not a best practice, since it would be mounted in ReadWrite mode.

**How to reproduce it**

Follow the steps described above.

**Expected behaviour**

The Falco pods should be up and running after the deployment.
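For context, the broad mount that the third case would require could look roughly like this in the pod spec (a sketch for illustration, not the chart's actual template; the container name and volume name are placeholders):

```yaml
# Sketch (not recommended): mounting all of /sys/module read-write, so the
# container runtime never has to create the /sys/module/falco mount point,
# which only exists on the host after the kernel module has been loaded.
containers:
  - name: falco
    image: falcosecurity/falco  # the full image flavour discussed above
    volumeMounts:
      - mountPath: /sys/module
        name: sys-module
volumes:
  - name: sys-module
    hostPath:
      path: /sys/module
```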