kubearmor / KubeArmor

Runtime Security Enforcement System. Workload hardening/sandboxing and implementing least-permissive policies made easy leveraging LSMs (BPF-LSM, AppArmor).
https://kubearmor.io/
Apache License 2.0

Kubearmor Relay not configured (enable log) when using helm or KubeArmorConfig #1866

Open henrikrexed opened 1 month ago

henrikrexed commented 1 month ago

Bug Report

When installing kubearmor-operator, I'm trying to enable the logs produced by the relay by setting ENABLE_STDOUT_LOGS on kubearmor-relay. My intention is to use Fluent Bit or the OpenTelemetry Collector to collect the various KubeArmor events.

I used a modified values.yaml file with the following settings enabled:

```yaml
kubearmorConfig:
  defaultCapabilitiesPosture: audit
  defaultFilePosture: audit
  defaultNetworkPosture: audit
  defaultVisibility: process,network,file,capability
  enableStdOutLogs: true
  enableStdOutAlerts: true
  enableStdOutMsgs: true
```

But the expected configuration is not applied.

I also tried deploying a KubeArmorConfig with similar settings, but kubearmor-relay still has ENABLE_STDOUT_LOGS set to false.
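To confirm which values actually landed on the relay, the deployment's container env can be dumped and inspected. This is a sketch: the deployment name `kubearmor-relay` is assumed from the default operator install, and the sample JSON below is hypothetical, mimicking the reported behavior.

```shell
# On a live cluster, dump the relay container's env (deployment name
# assumed from the default operator install):
#   kubectl -n kubearmor get deployment kubearmor-relay \
#     -o jsonpath='{.spec.template.spec.containers[0].env}'
#
# Small helper to check that output for a given variable being "true".
check_env() {
  # $1 = env JSON dump, $2 = variable name
  echo "$1" | grep -q "\"name\":\"$2\",\"value\":\"true\"" \
    && echo "$2 enabled" || echo "$2 disabled"
}

# Hypothetical sample of the env dump, matching the reported symptom:
env_json='[{"name":"ENABLE_STDOUT_LOGS","value":"false"},{"name":"ENABLE_STDOUT_ALERTS","value":"true"}]'
check_env "$env_json" ENABLE_STDOUT_LOGS    # reported: stays disabled
check_env "$env_json" ENABLE_STDOUT_ALERTS
```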


To Reproduce

  1. Install the operator with the modified values.yaml:

```
helm upgrade --install kubearmor-operator kubearmor/kubearmor-operator -n kubearmor --create-namespace -f kubearmor/values.yaml
```

  2. Or install it with default values and apply a KubeArmorConfig separately:

```
helm upgrade --install kubearmor-operator kubearmor/kubearmor-operator -n kubearmor --create-namespace
kubectl apply -f kubearmor/kubeArmorConfig.yaml
```

Here is the KubeArmorConfig:


```yaml
apiVersion: operator.kubearmor.com/v1
kind: KubeArmorConfig
metadata:
  labels:
    app.kubernetes.io/name: kubearmorconfig
    app.kubernetes.io/instance: kubearmorconfig-sample
    app.kubernetes.io/part-of: kubearmoroperator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: kubearmoroperator
  name: kubearmorconfig-default
  namespace: kubearmor
spec:
  defaultCapabilitiesPosture: audit
  defaultFilePosture: audit
  defaultNetworkPosture: audit
  defaultVisibility: process,network,capabilities
  enableStdOutLogs: true
  enableStdOutAlerts: true
  enableStdOutMsgs: true
  seccompEnabled: false
  alertThrottling: false
  maxAlertPerSec: 10
  throttleSec: 30
  kubearmorImage:
    image: kubearmor/kubearmor:stable
    imagePullPolicy: Always
  kubearmorInitImage:
    image: kubearmor/kubearmor-init:stable
    imagePullPolicy: Always
  kubearmorRelayImage:
    image: kubearmor/kubearmor-relay-server
    imagePullPolicy: Always
  kubearmorControllerImage:
    image: kubearmor/kubearmor-controller
    imagePullPolicy: Always
```

**Expected behavior**

The kubearmor-relay deployment should have ENABLE_STDOUT_LOGS set to true.

kareem-DA commented 1 week ago

I am seeing this problem as well. It feels like a race condition. I have this scripted for deployment, using Flux CD to deploy KubeArmor, followed by:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubearmor
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: kubearmor
  namespace: kubearmor
spec:
  interval: 5m
  url: https://kubearmor.github.io/charts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kubearmor
  namespace: kubearmor
spec:
  chart:
    spec:
      chart: kubearmor-operator
      interval: 5m
      sourceRef:
        kind: HelmRepository
        name: kubearmor
      version: v1.4.0
  driftDetection:
    mode: enabled
  install:
    remediation:
      retries: 3
  interval: 10m
  releaseName: kubearmor-operator
  timeout: 5m
  upgrade:
    remediation:
      retries: 3
---
apiVersion: operator.kubearmor.com/v1
kind: KubeArmorConfig
metadata:
  labels:
    app.kubernetes.io/created-by: kubearmoroperator
    app.kubernetes.io/instance: kubearmorconfig-sample
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/name: kubearmorconfig
    app.kubernetes.io/part-of: kubearmoroperator
  name: kubearmorconfig-default
  namespace: kubearmor
spec:
  alertThrottling: true
  defaultCapabilitiesPosture: audit
  defaultFilePosture: audit
  defaultNetworkPosture: audit
  defaultVisibility: process,network
  enableStdOutAlerts: true
  enableStdOutLogs: false
  enableStdOutMsgs: false
  kubearmorControllerImage:
    image: kubearmor/kubearmor-controller
    imagePullPolicy: Always
  kubearmorImage:
    image: kubearmor/kubearmor:stable
    imagePullPolicy: Always
  kubearmorInitImage:
    image: kubearmor/kubearmor-init:stable
    imagePullPolicy: Always
  kubearmorRelayImage:
    image: kubearmor/kubearmor-relay-server
    imagePullPolicy: Always
  maxAlertPerSec: 10
  seccompEnabled: false
  throttleSec: 30
```

After the initial deployment, the environment variables on the relay pod are always all false. If I make a change to the config (either by pushing an updated config, or by editing it on the cluster with `kubectl edit ...`) after the entire deployment is up, the environment variables on the relay pod do get updated.
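Based on that observation, one possible workaround (a sketch, not a confirmed fix) is to nudge the operator into re-reconciling by patching the KubeArmorConfig once all pods are running; the resource name and namespace are taken from the manifests above.

```shell
# Hypothetical workaround: the relay env vars appear to propagate only when
# the KubeArmorConfig *changes* after the stack is up, so apply a merge
# patch that flips the field to the desired value.
patch='{"spec":{"enableStdOutLogs":true}}'

# On a live cluster (resource name/namespace from the manifests above):
#   kubectl -n kubearmor patch kubearmorconfig kubearmorconfig-default \
#     --type merge -p "$patch"
echo "$patch"
```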

rksharma95 commented 6 days ago

@henrikrexed I'm not able to reproduce the issue with the current stable release. Can you please check again?

rksharma95 commented 6 days ago

@kareem-DA With Flux or any GitOps tool, the initial configuration is the desired state, right? So any manual change to KubeArmorConfig would be reverted in the next reconciliation. Let me know if I'm missing anything here.

rksharma95 commented 6 days ago

@kareem-DA Thanks for the clarification in the Slack discussion. I will try again to reproduce the issue and report back here.