open-telemetry / opentelemetry-collector

OpenTelemetry Collector
https://opentelemetry.io
Apache License 2.0

The defined service is triggered, but the zipkin exporter connection is refused. #4985

Closed · Xianyong closed this issue 2 years ago

Xianyong commented 2 years ago

```
Resource labels:
     -> service.name: STRING(bona_service)
InstrumentationLibrarySpans #0
InstrumentationLibrary main
Span #0
    Trace ID       : 01aaaa049248391e2b532e1c0e50eafa
    Parent ID      :
    ID             : 34f3801230720fba
    Name           : foo_bonaaaa
    Kind           : SPAN_KIND_INTERNAL
    Start time     : 2022-03-12 15:28:02.725104384 +0000 UTC
    End time       : 2022-03-12 15:28:02.725247488 +0000 UTC
    Status code    : STATUS_CODE_UNSET
    Status message :
```

```
2022-03-12T15:28:02.826Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "5.52330144s"}
2022-03-12T15:28:19.158Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "13.101300599s"}
2022-03-12T15:28:32.260Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "15.823926909s"}
2022-03-12T15:28:48.087Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "23.404886645s"}
2022-03-12T15:29:11.496Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "35.604692186s"}
2022-03-12T15:29:47.103Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "16.969110576s"}
2022-03-12T15:30:04.075Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "19.695577642s"}
2022-03-12T15:30:23.772Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "17.909085567s"}
2022-03-12T15:30:41.685Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "24.027355817s"}
```

Xianyong commented 2 years ago

I tried for another 5 hours. It always shows: dial tcp 127.0.0.1:9411: connect: connection refused

BTW, below is the content of my otlp_config.yaml file.

```yaml
extensions:
  memory_ballast:
    size_mib: 512
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
  memory_limiter:
    # 75% of maximum memory up to 4G
    limit_mib: 1536
    # 25% of limit up to 2G
    spike_limit_mib: 512
    check_interval: 5s

exporters:
  logging:
    logLevel: debug
  zipkin:
    endpoint: "http://localhost:9411/api/v2/spans"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [logging, zipkin]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [logging]
  extensions: [memory_ballast, zpages]
```

bogdandrutu commented 2 years ago

Hi @Xianyong, thanks for trying this product.

Is your zipkin instance at http://localhost:9411 accessible from the docker container/process where the collector runs? It looks like the collector cannot establish a connection to the localhost:9411 address. Can you check that, please?
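(For anyone checking the same thing: a minimal sketch of how such a check could be done, not from the original thread. It assumes the container names `otelcol` and `zipkin` used in the `docker run` commands later in this thread, and uses the public `curlimages/curl` image to probe from inside the collector's network namespace.)

```sh
# Reachable from the host? Port 9411 is published, so this should succeed.
curl -sS http://localhost:9411/api/v2/services

# Reachable from the collector's network namespace? Inside that namespace,
# "localhost" is the collector container itself, so this is expected to fail
# with "connection refused" when Zipkin runs in a separate container.
docker run --rm --network container:otelcol curlimages/curl \
  -sS http://localhost:9411/api/v2/services
```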

Xianyong commented 2 years ago

@bogdandrutu Hi, thanks so much for your reply.

I have two docker containers: one is the Zipkin docker instance, and the other is the opentelemetry collector docker instance. Both are pulled from Docker Hub and run separately. Besides, I tested using an opentelemetry client (Python) to export data directly to the Zipkin instance, and I could successfully find the telemetry in the Zipkin UI.

However, when I changed the opentelemetry client to send trace data to the opentelemetry collector docker instance, which then exported the trace data to the Zipkin instance, I got the errors described above.

---- Here are the two instances ----

```sh
sudo docker run -d --restart always -p 9411:9411 --name zipkin openzipkin/zipkin

sudo docker run --rm -it -p 4317:4317 -p 4318:4318 -p 13133:13133 -p 14250:14250 \
  -p 55678-55679:55678-55679 -p 8888:8888 \
  -v "${PWD}/local/otel-config.yaml":/otel-local-config.yaml \
  --name otelcol otel/opentelemetry-collector --config otel-local-config.yaml
```
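(One common way to make the two containers reach each other, sketched from the commands above rather than taken from the original thread, is to put both on a user-defined Docker network and point the exporter at the Zipkin container by name instead of at localhost.)

```sh
# Create a shared network and attach both containers to it
sudo docker network create otel-net

sudo docker run -d --restart always --network otel-net -p 9411:9411 \
  --name zipkin openzipkin/zipkin

sudo docker run --rm -it --network otel-net -p 4317:4317 -p 4318:4318 \
  -v "${PWD}/local/otel-config.yaml":/otel-local-config.yaml \
  --name otelcol otel/opentelemetry-collector --config otel-local-config.yaml

# and in otel-config.yaml the zipkin exporter would then point at the
# container name instead of localhost:
#   zipkin:
#     endpoint: "http://zipkin:9411/api/v2/spans"
```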

Xianyong commented 2 years ago

This issue is closed; it was resolved by checking the external IP address of the docker containers. Thanks so much again, @bogdandrutu.
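(For reference, a minimal sketch of how a container's external IP address, as mentioned above, can be looked up; the container name `zipkin` from earlier in the thread is assumed.)

```sh
# Print the Zipkin container's IP address on its Docker network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' zipkin

# The printed address (e.g. 172.17.0.x) can then replace "localhost" in the
# zipkin exporter endpoint, though a shared network with a container name is
# usually more stable across restarts.
```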