falcosecurity / plugins

Falco plugins registry

Document k8smeta in falco's helm-chart with an MWE #513

Closed fjellvannet closed 5 months ago

fjellvannet commented 5 months ago

What to document

I would like a minimal working example (MWE) showing how to set up k8smeta and k8s-metacollector using the official Helm chart.

I used the command indicated on line 269 of the Falco Helm chart README.md to enable Falco + metacollector + k8smeta.
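
For reference, that command is roughly of the following form (a sketch, not the exact README text; the release name and namespace are only examples, and the key flag matches the collectors.kubernetes.enabled value discussed below):

# Sketch of the install command; release name and namespace are assumptions.
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set collectors.kubernetes.enabled=true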

As a result, the k8s-metacollector is deployed, and the logs in the Falco pods indicate that the k8smeta plugin is activated as well. There are no error messages in the pod logs indicating that something is wrong. Only this warning from Helm is shown after running the Helm command:

It seems you are loading the following plugins [k8smeta], please make sure to install them by adding the correct reference to falcoctl.config.artifact.install.refs: [falco-rules:3 ghcr.io/falcosecurity/plugins/plugin/k8smeta:0.1.0]

However, it does not seem to be a problem, as the k8smeta plugin is installed according to the Falco pod logs.
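
For anyone who wants to double-check that the plugin really is loaded, the Falco pod logs can be inspected along these lines (a sketch; it assumes the chart created a DaemonSet named falco in the falco namespace with a main container called falco):

# Grep the Falco logs for the plugin name; resource and container names are assumptions, adjust to your release.
kubectl -n falco logs daemonset/falco -c falco | grep -i k8smeta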

But the enrichment itself does not work - the events generated from the default syscall source (nothing else is activated) are still not enriched with any k8s metadata.

Based on k8smeta's README.md, I tried adding this section to the values.yaml file that I use along with the Helm install command:

load_plugins: [k8smeta]

collectors:
  kubernetes:
    enabled: true

plugins:
  - name: k8smeta
    library_path: libk8smeta.so
    init_config:
      # Hostname of the k8s-metacollector Service in the falco namespace
      collectorHostname: falco-k8s-metacollector.falco.svc.cluster.local
      # I checked that this environment variable is set in the Falco pod by default
      nodeName: ${FALCO_HOSTNAME}
      collectorPort: 45000
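
For completeness, I pass this file to the chart with something like the following (release name, namespace and file name are just the ones I use):

# Apply the values file shown above on top of the chart defaults.
helm upgrade --install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  -f values.yaml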

With or without this addition, no metadata is added to my logs. So I would like to request a minimal working example showing how to set this up in Helm, as just adding collectors.kubernetes.enabled=true does not seem to be enough.
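
One sanity check that can be done at this point is to verify that the metacollector Service referenced in collectorHostname above exists and has endpoints, e.g.:

# Service name taken from the collectorHostname value above; adjust if yours differs.
kubectl -n falco get svc falco-k8s-metacollector
kubectl -n falco get endpoints falco-k8s-metacollector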

As both plugins are still relatively new, I could not find such an example elsewhere, so either an example or some more documentation explaining in detailed steps what to do would be appreciated.

fjellvannet commented 5 months ago

I figured it out - it turns out that the config given in the documentation actually works:

collectors:
  kubernetes:
    enabled: true

It deploys the k8s-metacollector and the k8smeta plugin without requiring any further configuration, and they do work - and have been working the whole time.
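
A quick way to confirm that both parts are running (in my setup the metacollector comes up as a Deployment next to the Falco DaemonSet):

# Both workloads live in the falco namespace when installed as above.
kubectl -n falco get daemonsets,deployments,pods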

The problem I had here was a misunderstanding. My underlying problem was that the container fields (container.id, pod.name, etc.) were not filled in my events, and I thought I had to add k8smeta to fix that. It turns out that k8smeta worked fine the whole time; my underlying issue was a different one.

I was using microk8s, and therefore the containerd socket could not be found: it lives in /var/snap/microk8s/common/run instead of /run/containerd/. If you experience similar issues with microk8s, run the following script after the helm upgrade command:

#!/bin/bash
set -euo pipefail

# Requires kubectl and jq. Adjust these if your release name or namespace differ.
NAMESPACE="falco"
DAEMONSET_NAME="falco"

# Find the index of the 'containerd-socket' volume in the DaemonSet spec
INDEX=$(kubectl -n "$NAMESPACE" get daemonset "$DAEMONSET_NAME" -o json | jq '.spec.template.spec.volumes | map(.name) | index("containerd-socket")')

# Check if the volume was found
if [ "$INDEX" = "null" ]; then
    echo "Volume 'containerd-socket' not found."
    exit 1
fi

# Construct the JSON Patch that points the hostPath at the microk8s socket directory
PATCH="[{\"op\": \"replace\", \"path\": \"/spec/template/spec/volumes/$INDEX/hostPath/path\", \"value\": \"/var/snap/microk8s/common/run\"}]"

# Apply the patch in the same namespace used above
kubectl -n "$NAMESPACE" patch daemonset "$DAEMONSET_NAME" --type='json' -p="$PATCH"

It changes the path that the Falco containers use to mount the containerd socket to /var/snap/microk8s/common/run, where the microk8s containerd socket is located. That resolved the issues I had, at least. If you have questions about this, feel free to contact me.
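
As an untested alternative to patching the DaemonSet: if your chart version exposes the containerd socket path as a value (recent falco charts have had a collectors.containerd.socket setting, but please check the values of the version you actually run), the same result should be achievable directly through Helm:

# Untested sketch: override the socket path via chart values instead of patching.
# collectors.containerd.socket is assumed to exist in your chart version.
helm upgrade --install falco falcosecurity/falco \
  --namespace falco \
  --set collectors.kubernetes.enabled=true \
  --set collectors.containerd.socket=/var/snap/microk8s/common/run/containerd.sock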