Closed by yaroslav-nakonechnikov 11 months ago
Same thing happened in etc/system/local/server.conf:

```
[splunk@splunk-prod-cluster-manager-0 splunk]$ cat etc/system/local/server.conf | grep "\[imds\]" -A 3
[imds]
imds_version = v2
imds_version
```
and in etc/system/local/web.conf:

```
[splunk@splunk-prod-cluster-manager-0 splunk]$ cat etc/system/local/web.conf | grep "\[settings\]" -A 3
[settings]
mgmtHostPort = 0.0.0.0:8089
enableSplunkWebSSL = True
enableSplunkWebSSL
```
So every file that was defined in the conf section is broken in the same way: a key is repeated on its own line, without a value.
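A quick way to spot this corruption is to look for value-less keys in the rendered files. A minimal sketch, assuming the same two files and relative paths as above (adjust for your deployment):

```bash
# List non-comment, non-stanza lines that have no "key = value" form, i.e. the
# duplicated value-less keys left behind after the bad render.
for f in etc/system/local/server.conf etc/system/local/web.conf; do
  echo "== $f"
  grep -vE '^\s*(\[|#|$)' "$f" | grep -v '=' || echo "   no value-less keys found"
done
```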
`kubectl delete pod` initiates recreation of the pod, and after that everything seems fine (see the command below). But we want to find the root cause, as this can happen anywhere!
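For reference, the workaround is simply deleting the affected pod so the operator's StatefulSet recreates it with a freshly rendered config (the namespace is a placeholder here):

```bash
kubectl delete pod splunk-prod-cluster-manager-0 -n <namespace>
```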
unmasked diag uploaded in case #3285863
I found how to replicate the issue: kill/stop the Splunk process in the pod; after some time the liveness probe will trigger a restart of the pod, and after that you'll see the broken config.
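A rough reproduction sketch along those lines; the pod/namespace names are illustrative, and the availability of pkill inside the container is an assumption:

```bash
# Kill splunkd inside the pod so the liveness probe eventually restarts the container.
kubectl exec splunk-prod-cluster-manager-0 -n <namespace> -- pkill -f splunkd

# Watch for the RESTARTS counter to increase, then inspect the rendered config.
kubectl get pod splunk-prod-cluster-manager-0 -n <namespace> -w
kubectl exec splunk-prod-cluster-manager-0 -n <namespace> -- \
  grep -A 3 '^\[imds\]' /opt/splunk/etc/system/local/server.conf
```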
@iaroslav-nakonechnikov we are looking into this issue now and will update you with our findings.
The issue still exists in 9.1.1.
@yaroslav-nakonechnikov, we are working with the splunk-ansible team to fix this issue. Will update you once that is done.
Was it fixed?
Hi @yaroslav-nakonechnikov, the fix didn't go into 9.1.1; it's planned for 9.1.2. Will update you once the release is complete.
@vivekr-splunk 9.1.2 is released, but still no news here. Is there any ETA?
Hello @yaroslav-nakonechnikov, this is fixed in the 9.1.2 build.
I managed to test it, and yes, it looks like this is fixed. But see https://github.com/splunk/splunk-operator/issues/1260
Please select the type of request
Bug
Tell us more
Describe the request: From time to time we see strange behavior where config files that were pushed through default.yml are broken after a pod restart.
A list of keys is duplicated without values.
Here is a configmap:
Expected behavior: default.yml is rendered the same way on every run, without issues.
Splunk setup on K8S: EKS 1.27, Splunk Operator 2.3.0, Splunk 9.1.0.2
Reproduction/Testing steps: After some unpredicted restart of the pod, the new pod starts with a broken config (see the verification sketch below).
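One hedged way to confirm the corrupted render in the recreated pod (pod/namespace names are illustrative, and how btool reports value-less keys may vary):

```bash
# Dump the resolved [imds] stanza with file origins and run Splunk's conf syntax check.
kubectl exec splunk-prod-cluster-manager-0 -n <namespace> -- \
  /opt/splunk/bin/splunk btool server list imds --debug
kubectl exec splunk-prod-cluster-manager-0 -n <namespace> -- \
  /opt/splunk/bin/splunk btool check
```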