Open · havard024 opened this issue 2 years ago
What happened?

Jaeger operator and Jaeger logs are interpreted as errors in our centralised logging system.

Below are four different log lines from the jaeger operator; I've also attached a screenshot from our centralised logging system showing the same:
I0924 08:19:39.985099 1 leaderelection.go:248] attempting to acquire leader lease observability/31e04290.jaegertracing.io...
2022/09/24 08:19:39 <nil>
1.664007579984811e+09 INFO Starting server {"kind": "health probe", "addr": "[::]:8081"}
1.6640075835232968e+09 INFO Stopping and waiting for non leader election runnables
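Note the three formats mixed here: klog-style lines from client-go ("I0924 ..."), plain Go log output ("2022/09/24 ... <nil>"), and zap-style console lines whose timestamps are Unix epoch seconds in scientific notation. A small Go sketch (illustrative only, not Jaeger code) to decode those epoch floats and confirm they line up with the klog wall-clock times:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamp copied from the "Starting server" line above;
	// it is Unix epoch seconds with a fractional part.
	ts := 1.664007579984811e+09
	sec := int64(ts)
	nsec := int64((ts - float64(sec)) * 1e9)
	// Prints 2022-09-24T08:19:39.98...Z, matching the klog line's
	// 08:19:39 timestamp.
	fmt.Println(time.Unix(sec, nsec).UTC().Format(time.RFC3339Nano))
}
```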
Below are three different log lines from the jaeger deployment; I've also attached a screenshot from our centralised logging system showing the same:
badger 2022/09/24 08:11:41 INFO: Set nextTxnTs to 0
2022/09/24 08:11:41 maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
{"level":"warn","ts":1664009833.0241213,"caller":"channelz/funcs.go:342","msg":"[core][Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {\n \"Addr\": \":16685\",\n \"ServerName\": \"localhost:16685\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Type\": 0,\n \"Metadata\": null\n}. Err: connection error: desc = \"transport: Error while dialing dial tcp :16685: connect: connection refused\"","system":"grpc","grpc_log":true}
I also see these lines from the all-in-one image, though I was unable to figure out which container they come from:
Level 6 [ ]: NumTables: 83. Size: 467 MiB of 467 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 8.0 MiB
{"msg":"Shutdown complete","caller":"flags/service.go:164","ts":1664007095.592402,"level":"info"}
Our setup is described under "Deployment configs" below.
Here's another issue which looks pretty similar: https://github.com/jaegertracing/jaeger-operator/issues/1817
Steps to reproduce

1. Install jaeger operator 1.37 with the supplied Jaeger manifest
2. Check the jaeger operator logs and the jaeger deployment logs
Expected behavior

- Seeing the same format of logs on every line
- Being able to change the log level globally
Relevant log output

jaeger operator logs

I0924 08:19:39.271244 1 request.go:601] Waited for 1.021218758s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/snapshot.storage.k8s.io/v1?timeout=32s
1.6640075799272454e+09 INFO controller-runtime.metrics Metrics server is starting to listen {"addr": "0.0.0.0:8383"}
1.6640075799818969e+09 INFO controller-runtime.builder Registering a mutating webhook {"GVK": "jaegertracing.io/v1, Kind=Jaeger", "path": "/mutate-jaegertracing-io-v1-jaeger"}
1.6640075799822905e+09 INFO controller-runtime.webhook Registering webhook {"path": "/mutate-jaegertracing-io-v1-jaeger"}
1.6640075799825728e+09 INFO controller-runtime.builder Registering a validating webhook {"GVK": "jaegertracing.io/v1, Kind=Jaeger", "path": "/validate-jaegertracing-io-v1-jaeger"}
1.6640075799827461e+09 INFO controller-runtime.webhook Registering webhook {"path": "/validate-jaegertracing-io-v1-jaeger"}
1.6640075799829955e+09 INFO controller-runtime.webhook Registering webhook {"path": "/mutate-v1-deployment"}
2022/09/24 08:19:39 <nil>
1.664007579983556e+09 INFO controller-runtime.webhook.webhooks Starting webhook server
1.6640075799839525e+09 INFO controller-runtime.certwatcher Updated current TLS certificate
1.6640075799842384e+09 INFO controller-runtime.webhook Serving webhook server {"host": "", "port": 9443}
1.6640075799844604e+09 INFO controller-runtime.certwatcher Starting certificate watcher
1.664007579984675e+09 INFO Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8383"}
1.664007579984811e+09 INFO Starting server {"kind": "health probe", "addr": "[::]:8081"}
I0924 08:19:39.985099 1 leaderelection.go:248] attempting to acquire leader lease observability/31e04290.jaegertracing.io...
I0924 08:22:13.015968 1 leaderelection.go:258] successfully acquired lease observability/31e04290.jaegertracing.io
1.6640077330196195e+09 DEBUG events Normal {"object": {"kind":"Lease","namespace":"observability","name":"31e04290.jaegertracing.io","uid":"f6b72918-d5c6-4e3d-ad58-2700d2c80250","apiVersion":"coordination.k8s.io/v1","resourceVersion":"16741521"}, "reason": "LeaderElection", "message": "jaeger-operator-6b6587468-jxzkp_3d3de6ed-fa93-4718-8878-d53e9a72f1b4 became leader"}
1.6640077330233665e+09 INFO Starting EventSource {"controller": "jaeger", "controllerGroup": "jaegertracing.io", "controllerKind": "Jaeger", "source": "kind source: *v1.Jaeger"}
1.6640077330234199e+09 INFO Starting EventSource {"controller": "jaeger", "controllerGroup": "jaegertracing.io", "controllerKind": "Jaeger", "source": "kind source: *v1.Jaeger"}
1.6640077330234263e+09 INFO Starting Controller {"controller": "jaeger", "controllerGroup": "jaegertracing.io", "controllerKind": "Jaeger"}
1.664007733023863e+09 INFO Starting EventSource {"controller": "namespace", "controllerGroup": "", "controllerKind": "Namespace", "source": "kind source: *v1.Namespace"}
1.664007733023894e+09 INFO Starting Controller {"controller": "namespace", "controllerGroup": "", "controllerKind": "Namespace"}
1.6640077331251454e+09 INFO Starting workers {"controller": "namespace", "controllerGroup": "", "controllerKind": "Namespace", "worker count": 1}
1.6640077331252797e+09 INFO Starting workers {"controller": "jaeger", "controllerGroup": "jaegertracing.io", "controllerKind": "Jaeger", "worker count": 1}
1.6640077333197255e+09 DEBUG controller-runtime.webhook.webhooks received request {"webhook": "/mutate-v1-deployment", "UID": "9a2c2417-61f0-4af4-a997-a58dcdb78719", "kind": "apps/v1, Kind=Deployment", "resource": {"group":"apps","version":"v1","resource":"deployments"}}
1.6640077333219082e+09 DEBUG controller-runtime.webhook.webhooks wrote response {"webhook": "/mutate-v1-deployment", "code": 200, "reason": "is jaeger deployment, we do not touch it", "UID": "9a2c2417-61f0-4af4-a997-a58dcdb78719", "allowed": true}
1.6640077334424644e+09 INFO KubeAPIWarningLogger autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
1.6640078141597042e+09 DEBUG controller-runtime.webhook.webhooks received request {"webhook": "/mutate-v1-deployment", "UID": "a5521e8c-8cef-49a5-978d-c567bae10bf9", "kind": "apps/v1, Kind=Deployment", "resource": {"group":"apps","version":"v1","resource":"deployments"}}
1.6640078141605399e+09 DEBUG controller-runtime.webhook.webhooks wrote response {"webhook": "/mutate-v1-deployment", "code": 200, "reason": "no action needed", "UID": "a5521e8c-8cef-49a5-978d-c567bae10bf9", "allowed": true}
1.6640078141816647e+09 DEBUG controller-runtime.webhook.webhooks received request {"webhook": "/mutate-v1-deployment", "UID": "5ef1b29d-6b23-4f21-8ef8-ed829d06f4ed", "kind": "apps/v1, Kind=Deployment", "resource": {"group":"apps","version":"v1","resource":"deployments"}}
1.6640078141825016e+09 DEBUG controller-runtime.webhook.webhooks wrote response {"webhook": "/mutate-v1-deployment", "code": 200, "reason": "no action needed", "UID": "5ef1b29d-6b23-4f21-8ef8-ed829d06f4ed", "allowed": true}
1.664007814204585e+09 DEBUG controller-runtime.webhook.webhooks received request {"webhook": "/mutate-v1-deployment", "UID": "ea56d1c5-da93-428b-823e-e5fbe736752d", "kind": "apps/v1, Kind=Deployment", "resource": {"group":"apps","version":"v1","resource":"deployments"}}
Screenshot

[screenshots: the "jaeger" and "jaeger operator" log lines as rendered in the centralised logging system]
Additional context

No response
Jaeger backend version

v1.37.0

SDK

No response

Pipeline

No response

Storage backend

No response
Operating system

linux
Deployment model

kubernetes
Deployment configs

We have deployed the following manifests: https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml

Below is our Jaeger manifest:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest
  namespace: istio-system
spec:
  allInOne:
    options:
      log-level: warn
      query:
        base-path: /jaeger
      collector:
        zipkin:
          host-port: 9411
  ingress:
    enabled: false
  storage:
    type: badger
    options:
      memory:
        max-traces: "50000"
      badger:
        ephemeral: false
        directory-key: "/badger/key"
        directory-value: "/badger/data"
I checked, and these are log messages from the controller-runtime package. Probably we're not setting one of the loggers properly or something. I'll try to do some deeper research :)
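For reference, a minimal sketch of how a controller-runtime-based operator typically wires its logger, assuming controller-runtime's standard zap helpers (this is illustrative, not the operator's actual code). With Development set to false the logger emits structured JSON, and BindFlags exposes flags like --zap-log-level and --zap-encoder, which would address the "change the log level globally" expectation:

```go
package main

import (
	"flag"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// Development mode prints human-readable console lines with
	// epoch-float timestamps (like the operator output above);
	// production mode (false) emits structured JSON with a "level"
	// field that centralised logging systems can parse.
	opts := zap.Options{Development: false}

	// Registers --zap-log-level, --zap-encoder, --zap-devel, etc.,
	// so level and format can be changed without rebuilding.
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	// Sets the shared logger used by controller-runtime; client-go's
	// klog output (the "I0924 ..." lines) is configured separately.
	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
}
```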