pfaelzerchen closed this issue 7 months ago
Other variants are:
Error: bad file '/etc/crowdsec/local_api_credentials.yaml': yaml: control characters are not allowed
Error: bad file '/etc/crowdsec/local_api_credentials.yaml': yaml: line 2: could not find expected ':'
time="20-12-2023 15:28:02" level=fatal msg="/etc/crowdsec/config.yaml: yaml: control characters are not allowed"
Error: bad file '/etc/crowdsec/config.yaml': yaml: control characters are not allowed
The error messages change from one pod restart to the next. Sometimes only one of the three pods is affected; it seems fairly random to me.
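The "control characters are not allowed" error usually means the mounted file contains raw bytes the YAML parser rejects (for example a stray NUL from a bad base64 round-trip). A minimal sketch for locating such bytes in a file copied out of the secret; the file name in the usage comment is illustrative:

```python
# Control characters YAML forbids in content: the C0 range except
# tab (0x09), LF (0x0a) and CR (0x0d).
FORBIDDEN = set(range(0x20)) - {0x09, 0x0A, 0x0D}

def find_control_chars(data: bytes):
    """Return (offset, byte) pairs for YAML-breaking control characters."""
    return [(i, b) for i, b in enumerate(data) if b in FORBIDDEN]

# Usage (path is illustrative):
#   hits = find_control_chars(open("local_api_credentials.yaml", "rb").read())
#   for offset, byte in hits:
#       print(f"offset {offset}: byte 0x{byte:02x}")
```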
It seems this was caused by the configuration of the agent secrets in values.yaml. I also tried switching to the agent-credentials secret, without any luck. Switching to TLS auth as described in https://www.crowdsec.net/blog/integrating-crowdsec-kubernetes-tls solved it: all agent pods now come up reliably.
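For reference, enabling TLS auth in the chart's values.yaml looks roughly like this. The key names below are assumptions, not taken from the chart itself, so verify them against the linked blog post and the chart's own values.yaml:

```yaml
# Illustrative only -- confirm key names against the crowdsec Helm chart.
tls:
  enabled: true          # serve the local API over TLS
  agent:
    tlsClientAuth: true  # agents authenticate with client certificates
                         # instead of machine credentials
```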
I noticed some strange behaviour with my Helm deployment of crowdsec. It runs on a 3-node k3s cluster.
After a successful deployment, crowdsec runs fine. Then I change something in values.yaml, e.g. the version tag for an upgrade, and redeploy. Sometimes one pod then comes up successfully while the other two stop with errors like the ones quoted above.
When I delete the whole deployment and redeploy it, everything comes up fine. But then I have to re-register the bouncers and re-enroll the instance in the crowdsec console.
At that point, I am not able to log into the failing container to see what the actual config.yaml looked like.
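Since the crashed container cannot be entered, one alternative is to decode the mounted secret directly and scan it for the offending bytes. A sketch assuming GNU grep; the secret name, namespace, and key in the commented example are hypothetical and must be adjusted to the actual release:

```shell
# has_ctrl_chars FILE -- succeed and print offending lines if the file
# contains YAML-breaking control characters (C0 range except tab/LF/CR).
has_ctrl_chars() {
  grep -nP '[\x00-\x08\x0b\x0c\x0e-\x1f]' "$1"
}

# Example (secret/namespace/key names are assumptions -- adjust to your release):
#   kubectl -n crowdsec get secret agent-credentials \
#     -o jsonpath='{.data.local_api_credentials\.yaml}' \
#     | base64 -d > /tmp/creds.yaml
#   has_ctrl_chars /tmp/creds.yaml && echo "control characters found"
```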