Seeing the same issue here.
Looking at the file on the NFS volume:
/filebeat/etc/filebeat
-rw-r--r-- 1 root wheel 740B Jan 23 10:38 filebeat.yml
And the PVCs on the NFS:
drwxrwxrwx 3 1000 1000 4.0K Jan 23 09:01 wazuh-wazuh-indexer-wazuh-indexer-0-pvc-294e632c-6490-4fb5-bdcb-f5d65efcc11e
drwxrwxrwx 3 1000 1000 4.0K Jan 6 2023 wazuh-wazuh-indexer-wazuh-indexer-0-pvc-30eb65c7-3d93-4050-8ca8-2968771f304d
drwxrwxrwx 4 root wheel 4.0K Jan 6 2023 wazuh-wazuh-manager-master-wazuh-manager-master-0-pvc-4b8eef7f-fad9-4742-af42-8de6d0e51aba
drwxrwxrwx 4 root wheel 4.0K Jan 23 09:01 wazuh-wazuh-manager-master-wazuh-manager-master-0-pvc-cd657ab1-bb99-4b3d-bd81-f99c4fbcce90
drwxrwxrwx 4 root wheel 4.0K Jan 23 09:01 wazuh-wazuh-manager-worker-wazuh-manager-worker-0-pvc-b05955e3-4bb5-49e2-a995-f36b89a5ee93
drwxrwxrwx 4 root wheel 4.0K Jan 6 2023 wazuh-wazuh-manager-worker-wazuh-manager-worker-0-pvc-f4b864c7-6a59-4dd3-a335-777eb05e45b1
And looking at the files within the pod itself while it's running:
The filebeat folder in /etc:
drwxr-xr-x 3 nobody 4294967294 4.0K Jan 23 15:55 filebeat
root@wazuh-manager-master-0:/etc/filebeat# ls -lah
total 480K
drwxr-xr-x 3 nobody 4294967294 4.0K Jan 23 15:53 .
drwxr-xr-x 1 root root 4.0K Jan 23 15:53 ..
-rw-r--r-- 1 nobody 4294967294 291K Jan 12 2021 fields.yml
-rw-r--r-- 1 nobody 4294967294 90K Jan 12 2021 filebeat.reference.yml
-rw-r--r-- 1 nobody 4294967294 740 Jan 23 15:53 filebeat.yml
drwxr-xr-x 2 nobody 4294967294 4.0K Jan 18 20:55 modules.d
-rw------- 1 nobody 4294967294 62K Jan 1 1970 wazuh-template.json
I seem to have solved this for my scenario. Some of this may help you.
I'm using nfs-subdir-external-provisioner. NFSv4, which is the default, uses idmap; when the client can't map an owner, files show up as nobody/4294967294, exactly like the listings above. You can probably get away with reconfiguring your export so it doesn't squash root.
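For reference, on a Linux NFS server that would mean an export entry along these lines (a sketch only; the export path and client subnet are assumptions, not from this thread):

# /etc/exports -- no_root_squash keeps uid 0 as uid 0 on the export
/srv/k8s_vols 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)

With root no longer squashed, files written by root in the pods (like filebeat.yml) stay owned by root instead of nobody, which is what the Filebeat ownership check wants.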
Otherwise, what I did was force it to use NFSv3. In envs/local-env/storage-class.yaml, set the mount option:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wazuh-storage
provisioner: cluster.local/nfs-subdir-external-provisioner
mountOptions:
  - nfsvers=3
I deleted the kustomization and re-deployed it so that it would update the storage class and mount as NFSv3.
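If you're following the same repo layout, that re-deploy is roughly this (a sketch; the kustomization root is assumed from the envs/local-env path above):

# StorageClass specs generally can't be updated in place,
# so delete and recreate rather than trying to patch the class
kubectl delete -k envs/local-env
kubectl apply -k envs/local-env

Note that already-provisioned PVs keep the mount options they were created with, so the PVCs have to be recreated too; tearing down and re-applying the whole kustomization takes care of that.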
That fixed the issues with the manager-master and manager-worker pods not coming up due to the filebeat issue.
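A quick way to confirm the volumes really remounted with the new version (run on the node, or in a pod that has a shell):

mount | grep -i nfs
# the mount options should now show vers=3 instead of vers=4.x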
However, that caused a new issue with the indexer pod. It runs an init container to set permissions on the data volume. With NetApp, the directories contain a built-in .snapshot folder, which the init container was not allowed to change permissions on.
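The failing step is essentially a recursive chown over the data path; something like this (a sketch: the exact path and error text are assumptions, though the 1000:1000 ownership matches the indexer PVC listings above):

# The init step hands the data dir to the indexer's uid/gid
chown -R 1000:1000 /var/lib/wazuh-indexer
# On NetApp NFS the recursion descends into the virtual .snapshot
# directory, which refuses ownership changes, so the init step errors out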
I had to go into the NetApp and disable snapdir access on the volume to prevent it from being seen:
volume modify -volume k8s_vols -snapdir-access false
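To double-check the setting took effect (assuming your ONTAP version exposes the field to volume show):

volume show -volume k8s_vols -fields snapdir-access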
Once that was done and the indexer pod restarted, everything came up as expected.
Thanks for your information, now my Wazuh is working. Thank you!
Dear Wazuh support, when I use an NFS volume I get the issue below, which puts the pods into CrashLoopBackOff:
Exiting: error loading config file: config file ("/etc/filebeat/filebeat.yml") must be owned by the user identifier (uid=0) or root
Filebeat exited. code=1
Thank you!