The actual pod start reports this:

```
time="16-11-2023 19:48:47" level=warning msg="No matching files for pattern /var/log/containers/ingress-controller-wbg-test-ingress-nginx-controllerwbg-test*.log" type=file
time="16-11-2023 19:48:47" level=info msg="Starting processing data"
```
Even though /var/log/containers has these files:

```
ingress-controller-wbg-test-ingress-nginx-controller-766dd266l5_wbg-test_controller-c34f862b8469cc11bf7b9649d7200ea229e986ac8a4a64772e4c06a48c0a16af.log
ingress-controller-wbg-test-ingress-nginx-controller-766dd266l5_wbg-test_mod-security-logger-ab5c16d6e8509de977ccb2fe09845a3de46acd456649a305716262a6945e4cfd.log
```
And the pattern was specified as `ingress-controller-wbg-test-ingress-nginx-controller-*`, like in the blog post.
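For reference, crowdsec's file datasource resolves these acquisition globs with Go's standard path matching (filepath.Glob / filepath.Match semantics, as far as I know). A minimal sketch, using one of the real file names listed above, the pattern printed in the warning, and a hypothetical pattern that keeps the `-*` and the `_wbg-test_` separators, shows which one would actually match:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// One of the actual files in /var/log/containers (copied from the listing above).
	file := "/var/log/containers/ingress-controller-wbg-test-ingress-nginx-controller-766dd266l5_wbg-test_controller-c34f862b8469cc11bf7b9649d7200ea229e986ac8a4a64772e4c06a48c0a16af.log"

	patterns := []string{
		// Pattern exactly as printed in the agent's warning message.
		"/var/log/containers/ingress-controller-wbg-test-ingress-nginx-controllerwbg-test*.log",
		// Hypothetical pattern with the pod-hash wildcard and _<namespace>_ separators intact.
		"/var/log/containers/ingress-controller-wbg-test-ingress-nginx-controller-*_wbg-test_*.log",
	}

	for _, p := range patterns {
		ok, err := filepath.Match(p, file)
		fmt.Printf("matches=%v err=%v pattern=%s\n", ok, err, p)
	}
}
```

Run as written, this prints matches=false for the pattern taken from the warning and matches=true for the hypothetical one.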
Hi all,
We deploy the chart into our cluster; this is our config:
```yaml
spec:
  project: wbg
  source:
    repoURL: 'https://crowdsecurity.github.io/helm-charts'
    targetRevision: 0.9.9
    chart: crowdsec
    helm:
      releaseName: wbg-test-crowdsec
      values: |
        container_runtime: containerd
        agent:
          nodeSelector:
            nodepool: newhostingnodes
          acquisition:
```
The config on the deployed agent pods is then: Crowdsec:
And the content of acquisition: filenames:
Yet the actual log files are named like this:

```
ingress-controller-wbg-test-ingress-nginx-controller-766dd266l5_wbg-test_controller-c34f862b8469cc11bf7b9649d7200ea229e986ac8a4a64772e4c06a48c0a16af.log
ingress-controller-wbg-test-ingress-nginx-controller-766dd266l5_wbg-test_mod-security-logger-ab5c16d6e8509de977ccb2fe09845a3de46acd456649a305716262a6945e4cfd.log
```
Would this not cause the file names to fail to match the pattern? As you can see, the ingress-nginx log files have an ID attached to the controller name before the namespace is appended.
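If I understand the chart correctly, the acquisition glob is assembled roughly as `/var/log/containers/<podName>_<namespace>_*.log` (that template shape is my assumption; it is not shown anywhere above). Under that assumption, the ID before the namespace is absorbed by the trailing `-*` of the configured podName, as this small sketch with the file names from above illustrates:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// buildPattern mimics the assumed template: <podName>_<namespace>_*.log under /var/log/containers.
func buildPattern(podName, namespace string) string {
	return fmt.Sprintf("/var/log/containers/%s_%s_*.log", podName, namespace)
}

func main() {
	// podName pattern as specified (per the blog post) and the namespace from this setup.
	pattern := buildPattern("ingress-controller-wbg-test-ingress-nginx-controller-*", "wbg-test")

	// Actual file names copied from the listing above.
	files := []string{
		"/var/log/containers/ingress-controller-wbg-test-ingress-nginx-controller-766dd266l5_wbg-test_controller-c34f862b8469cc11bf7b9649d7200ea229e986ac8a4a64772e4c06a48c0a16af.log",
		"/var/log/containers/ingress-controller-wbg-test-ingress-nginx-controller-766dd266l5_wbg-test_mod-security-logger-ab5c16d6e8509de977ccb2fe09845a3de46acd456649a305716262a6945e4cfd.log",
	}

	for _, f := range files {
		ok, _ := filepath.Match(pattern, f)
		// The "-*" in podName matches the pod hash (766dd266l5) that sits
		// before the _wbg-test_ namespace separator in the file name.
		fmt.Printf("%v  %s\n", ok, filepath.Base(f))
	}
}
```

With a glob of that shape both files match, so (still under the assumption above) the ID itself should not prevent a match as long as the wildcard makes it into the pattern the agent actually evaluates.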