Can you show the logs of the storage plugin container?
kubectl logs kelemetry-1689762474-frontend-755b8f47ff-rl4jl -c storage-plugin
By the way, if you already have an ElasticSearch cluster set up, you are encouraged to configure Kelemetry to use it instead of the single-instance Badger DB.
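As a rough sketch, pointing Kelemetry at an existing ElasticSearch cluster would be a Helm values override at install time. The value keys and the ES service URL below are illustrative assumptions, not the chart's confirmed schema; check the storage section of the chart's own values.yaml for the real names:

# hypothetical values keys, verify against the chart's values.yaml;
# only the intent (ElasticSearch instead of Badger) is from this thread
helm install kelemetry <chart-path> \
  --set storage.type=elasticsearch \
  --set 'storage.elasticsearch.serverUrls[0]=http://elasticsearch.monitoring.svc:9200'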
k -n kelemetry logs -f kelemetry-1689762474-frontend-755b8f47ff-rl4jl -c storage-plugin
time="2023-07-20T05:00:57Z" level=error msg="unknown flag: --trace-server-enable"
And I do not have an ElasticSearch cluster set up in my k8s cluster.
This pod runs fine when I remove --trace-server-enable. Why is this incorrect parameter present in the official chart templates file?
What image version are you using for the frontend pod? The latest version has the --trace-server-enable flag.
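For reference, one way to check which image each container in that pod is running (pod name taken from the commands above):

kubectl -n kelemetry get pod kelemetry-1689762474-frontend-755b8f47ff-rl4jl \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'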
Kelemetry image version: ghcr.io/kubewharf/kelemetry:0.1.0. And when I remove the --trace-server-enable flag, Jaeger does not trace the k8s deployments.
Try using 0.2.2 instead. 0.1.0 is an old version.
This works after changing the image version, but there is another issue:
{"level":"info","ts":1689835956.4600453,"caller":"channelz/funcs.go:340","msg":"[core][Server #4 ListenSocket #7] ListenSocket created","system":"grpc","grpc_log":true}
{"level":"info","ts":1689835956.4600942,"caller":"app/server.go:282","msg":"Starting HTTP server","port":16686,"addr":":16686"}
{"level":"info","ts":1689835957.464222,"caller":"channelz/funcs.go:340","msg":"[core][Channel #5 SubChannel #6] Subchannel Connectivity change to IDLE","system":"grpc","grpc_log":true}
{"level":"info","ts":1689835957.4643304,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00059c078, {IDLE connection error: desc = \"transport: Error while dialing dial tcp :16685: connect: connection refused\"}","system":"grpc","grpc_log":true}
{"level":"info","ts":1689835957.4643457,"caller":"channelz/funcs.go:340","msg":"[core][Channel #5] Channel Connectivity change to IDLE","system":"grpc","grpc_log":true}
The Jaeger UI does not show traces for the deployment changes.
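For context: jaeger-query serves its web UI on HTTP port 16686 and its query API on gRPC port 16685, so the "dial tcp :16685: connect: connection refused" line above means the UI started but could not reach its own gRPC query service, which often points back at the storage backend. To reach the UI locally (the service name here is an assumption based on the pod name above):

kubectl -n kelemetry port-forward svc/kelemetry-frontend 16686:16686
# then open http://localhost:16686 in a browser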
Are you also using v0.2.2 of the helm chart?
No specific version. When running helm install, I only used the chart at commit c42c0ff010c570a984663c8911568f3fa05e5ee7.
Try using the latest version (v0.2.2).
How do I specify that version (v0.2.2) when using helm install?
Set kelemetryImage.tag to 0.2.2 in the helm chart's values.yaml. Or just install oci://ghcr.io/kubewharf/kelemetry-chart:0.2.2 directly, which already uses the 0.2.2 image by default.
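Concretely, either of these should work (the release name kelemetry and the chart path are placeholders):

# pin the image tag when installing from a local chart checkout
helm install kelemetry <chart-path> --set kelemetryImage.tag=0.2.2

# or install the published chart, which defaults to the 0.2.2 image
helm install kelemetry oci://ghcr.io/kubewharf/kelemetry-chart --version 0.2.2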
I set kelemetryImage.tag to v0.2.2 and added --trace-server-enable. All pods are running, but I get this error log:
{"level":"info","ts":1689835956.4600453,"caller":"channelz/funcs.go:340","msg":"[core][Server #4 ListenSocket #7] ListenSocket created","system":"grpc","grpc_log":true}
{"level":"info","ts":1689835956.4600942,"caller":"app/server.go:282","msg":"Starting HTTP server","port":16686,"addr":":16686"}
{"level":"info","ts":1689835957.464222,"caller":"channelz/funcs.go:340","msg":"[core][Channel #5 SubChannel #6] Subchannel Connectivity change to IDLE","system":"grpc","grpc_log":true}
{"level":"info","ts":1689835957.4643304,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00059c078, {IDLE connection error: desc = \"transport: Error while dialing dial tcp :16685: connect: connection refused\"}","system":"grpc","grpc_log":true}
{"level":"info","ts":1689835957.4643457,"caller":"channelz/funcs.go:340","msg":"[core][Channel #5] Channel Connectivity change to IDLE","system":"grpc","grpc_log":true}
The Jaeger UI still does not show traces for the deployment changes.
Can you check the frontend pod logs? Might be a duplicate of #127
Frontend logs:
{"level":"info","ts":1689835955.4305463,"caller":"grpclog/component.go:71","msg":"[core]Creating new client transport to \"{\\n \\\"Addr\\\": \\\"localhost:17271\\\",\\n \\\"ServerName\\\": \\\"localhost:17271\\\",\\n \\\"Attributes\\\": null,\\n \\\"BalancerAttributes\\\": null,\\n \\\"Type\\\": 0,\\n \\\"Metadata\\\": null\\n}\": connection error: desc = \"transport: Error while dialing dial tcp [::1]:17271: connect: connection refused\"","system":"grpc","grpc_log":true}
{"level":"warn","ts":1689835955.4305613,"caller":"channelz/funcs.go:342","msg":"[core][Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {\n \"Addr\": \"localhost:17271\",\n \"ServerName\": \"localhost:17271\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Type\": 0,\n \"Metadata\": null\n}. Err: connection error: desc = \"transport: Error while dialing dial tcp [::1]:17271: connect: connection refused\"","system":"grpc","grpc_log":true}
But the storage-plugin container is running with no errors.
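One way to narrow that down: the frontend log shows jaeger-query dialing localhost:17271, which is the storage plugin's gRPC endpoint inside the same pod. Port-forwarding that port and probing it shows whether the storage-plugin container is actually listening (grpcurl is an optional local tool, and the list call assumes the plugin exposes gRPC reflection; a connection refused here reproduces the frontend's error either way):

kubectl -n kelemetry port-forward deploy/kelemetry-frontend 17271:17271
# in another terminal, if grpcurl is installed:
grpcurl -plaintext localhost:17271 list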
Can you check the frontend pod logs? Might be a duplicate of #127
It is not similar to that one.
Could you check the logs of the storage-plugin container? k logs deploy/kelemetry-frontend -c storage-plugin
Closed as stale due to lack of response.
Steps to reproduce
Expected behavior
All pods are running.
Actual behavior
All pods are running, but the frontend has errors (error.log).
Kelemetry version
c42c0ff010c570a984663c8911568f3fa05e5ee7
Environment
kubernetes version:
cloud provider:
local vm
Jaeger version:
jaegertracing/jaeger-collector:1.42 (using the deployment collector of Kelemetry)
storage:
custom nfs storage