Hi, thanks for raising this :).
Could you share your opentelemetry-operator version? Could it be that you have updated your deployment of opentelemetry-operator, and now are dealing with: https://opentelemetry.io/blog/2023/jaeger-exporter-collector-migration/
If so, the following PR is related to it: https://github.com/kubewarden/helm-charts/pull/317.
@viccuad Thanks for replying on the issue. We are using version 0.34.1 of the OpenTelemetry operator. Also, we need our policy-server pods' logs to be in JSON format, but they are being changed to OTLP after enabling the telemetry sidecar. I think it's the same as the Jaeger case mentioned above.
I've looked into the issue and I found the problem.
In this case the Kubewarden stack is deployed with the metrics support turned on, while the tracing support is not enabled. This is a completely reasonable use case.
The metrics support is driven by a configuration value of our helm chart. This causes the kubewarden-controller to be started with metrics enabled. The controller exposes some metrics via Prometheus too. On top of that, when the metrics support is enabled, the controller creates the PolicyServer Deployment with metrics enabled.
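For context, the chart-level switch looks roughly like this (illustrative sketch; the exact keys may differ between chart versions):

```yaml
# kubewarden-controller values.yaml (pre-1.8, illustrative sketch;
# exact keys may differ between chart versions)
telemetry:
  enabled: True   # one flag drove both metrics and the forced log format
  metrics:
    port: 8080
```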
BUT, the controller will also force the logging format of the PolicyServer to be otlp. This is what enables the tracing system inside the PolicyServer. As a result, the logs will not use the JSON format, and some information is no longer printed at all (it is sent to the collector instead, so that it can be exported via traces).
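Concretely, the Deployment generated by the controller ends up with something like this (illustrative excerpt):

```yaml
# PolicyServer Deployment, env section generated by the controller
# (illustrative excerpt)
env:
  - name: KUBEWARDEN_LOG_FMT
    value: otlp   # forced by the controller whenever metrics support is on
```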
The bug is that the kubewarden controller conflates the metrics and traces use cases under one single flag: the metrics one.
The kubewarden-controller should stop managing the PolicyServer log format in an explicit way. We basically have to remove the lines that force this environment variable (the permalink is quoted later in this thread).
Once this is done, the only way to enable tracing support inside a PolicyServer is by writing a spec like the following one:
```yaml
apiVersion: policies.kubewarden.io/v1
kind: PolicyServer
metadata:
  name: something-with-traces-enabled
spec:
  image: ghcr.io/kubewarden/policy-server:latest
  replicas: 1
  env:
    - name: KUBEWARDEN_LOG_FMT
      value: otlp
```
We need to update our documentation to inform our users about this requirement.
Moreover, we need to change our kubewarden-defaults helm chart to ensure the default PolicyServer it creates has this configuration whenever the tracing support is enabled at the chart level.
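A minimal sketch of how the defaults chart could render that (hypothetical, since the chart had no telemetry value at this point; see the note further down):

```yaml
# template snippet in kubewarden-defaults (hypothetical sketch)
{{- if .Values.telemetry.enabled }}
env:
  - name: KUBEWARDEN_LOG_FMT
    value: otlp
{{- end }}
```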
We could expand the PolicyServer CRD to have two new dedicated attributes: metrics and logging. The kubewarden controller would then generate the PolicyServer Deployment using the data shipped with the Custom Resource.
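For illustration, such fields could look like this (hypothetical sketch; these attributes were never added as proposed):

```yaml
apiVersion: policies.kubewarden.io/v1
kind: PolicyServer
metadata:
  name: with-dedicated-fields
spec:
  image: ghcr.io/kubewarden/policy-server:latest
  replicas: 1
  metrics:
    enabled: true    # hypothetical dedicated field
  logging:
    format: json     # hypothetical dedicated field
```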
I'm always a bit reluctant to add new fields to the CRD. I would prefer to go with the proposed solution and then, based on user feedback, evaluate if we need to introduce dedicated fields.
@flavio Many thanks for looking into the issue and finding the cause of this. The suggested fixes make sense and we are OK with them. Do you have an ETA for releasing a version with the fix to the code lines below, so that we can move forward? https://github.com/kubewarden/kubewarden-controller/blob/4333c9019bcce21667d34d7cf9bf4ddbf789f005/internal/pkg/admission/policy-server-deployment.go#L335-L340
Regards, Sunil Kumar
Opened https://github.com/kubewarden/kubewarden-controller/pull/553 in the controller to not reconcile the env var.
> Moreover, we need to change our kubewarden-defaults helm chart to ensure the default PolicyServer it creates has this configuration whenever the tracing support is enabled at the chart level.
Note that kubewarden-controller does have a .Values.telemetry value, but kubewarden-defaults has none. I suppose it's fine to add that new value; it will need docs too, and downstream consumers will need to use it.
@vistasunil we have just released Kubewarden v1.8.0, with a new feature in the controller: it can now reconcile the metrics and the tracing bits separately, which should solve this issue :).
Note that this means a backwards-incompatible change to the values.yaml of the kubewarden-controller chart; as always, you can read more about it in the release blog post: https://www.kubewarden.io/blog/2023/10/kubewarden-1.8-release/
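For reference, the split looks roughly like this (illustrative sketch; check the values.yaml shipped with your chart version for the exact keys):

```yaml
# kubewarden-controller values.yaml (1.8+, illustrative sketch)
telemetry:
  metrics:
    enabled: True    # reconcile only the metrics bits
  tracing:
    enabled: False   # tracing is now opted into separately
```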
@viccuad Thanks for the updates. I will deploy the changes and see if this is aligned with our requirement. :)
@viccuad This works like a charm. Big thanks for fixing this. Feel free to close this issue now. :)
Current Behavior
We want to deploy the telemetry operator and need telemetry logs to be exported in json format. We know that this can be done through the Helm values file when deploying the PolicyServer with the kubewarden-defaults Helm chart. Below is our values file for kubewarden-defaults, and below that is our values file for the kubewarden-controller Helm chart that enables telemetry:
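(The original snippets are not preserved in this thread; what follows is an illustrative reconstruction of the relevant excerpts, so the exact keys may differ from the real files.)

```yaml
# kubewarden-defaults values.yaml (excerpt, illustrative reconstruction)
policyServer:
  env:
    - name: KUBEWARDEN_LOG_FMT
      value: json
```

```yaml
# kubewarden-controller values.yaml (excerpt, illustrative reconstruction;
# this predates the 1.8 split of the metrics and tracing values)
telemetry:
  enabled: True
```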
This correctly sets the PolicyServer environment variables, with KUBEWARDEN_LOG_FMT set to json.
But the otc-container sidecar that the OpenTelemetry collector injects into the policy-server-default Deployment changes the KUBEWARDEN_LOG_FMT environment variable to otlp, and the server starts sending all logs in OTLP format. We want to keep the log format as JSON while OpenTelemetry is enabled. The relevant part of the deployment spec is shown below.
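(The original spec is likewise not preserved; the following is an illustrative reconstruction of the relevant excerpt.)

```yaml
# policy-server-default Deployment after the otc-container sidecar is
# injected (excerpt, illustrative reconstruction)
env:
  - name: KUBEWARDEN_LOG_FMT
    value: otlp   # overwritten; our values file set this to json
```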
Kindly look into this and let us know if a solution is available to achieve this.
Thanks & Regards, Sunil Kumar
Expected Behavior
When telemetry is enabled, the KUBEWARDEN_LOG_FMT environment variable of the policy-server-default Deployment should keep the json value provided through our values file, and all logs should be sent in JSON format.
Steps To Reproduce
All details shared in description.
Environment
Anything else?
No response