jaegertracing / jaeger

CNCF Jaeger, a Distributed Tracing Platform
https://www.jaegertracing.io/
Apache License 2.0

grpc: Server.Serve failed to create ServerTransport #2172

Closed · prana24 closed this issue 2 years ago

prana24 commented 4 years ago

Requirement - what kind of business use case are you trying to solve?

Problem - what in Jaeger blocks you from solving the requirement?

Proposal - what do you suggest to solve the problem or improve the existing situation?

Any open questions to address

prana24 commented 4 years ago

We have Jaeger 1.14 deployed on an Azure Kubernetes cluster. The collector service is defined as a LoadBalancer with the external-dns.alpha.kubernetes.io/hostname annotation, and our agent connects to the collector using the service DNS name. I keep seeing warning logs in the collector log, and it makes me worried that something is wrong, even though I do see traces and spans arriving in the Jaeger backend, i.e. Elasticsearch.

Below is the log line that I keep observing.

WARNING: 2020/04/07 18:40:43 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams failed to receive the preface from client: read tcp 192.168.130.18:14250->10.128.36.40:50417: read: connection reset by peer"

Any idea/suggestion will be highly appreciated.

pavolloffay commented 4 years ago

@prana24 could you please update to Jaeger 1.17.1?

dntosas commented 4 years ago

@pavolloffay getting this kind of warning even on 1.17.1 ^^

a31amit commented 4 years ago

I have noticed this in version 1.18 with Kafka as the storage backend.

{"level":"warn","ts":1591511301.9598265,"caller":"grpc@v1.27.1/server.go:669","msg":"grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams failed to receive the preface from client: EOF\"","system":"grpc","grpc_log":true}

jpkrohling commented 4 years ago

Are you seeing this message periodically? Are you seeing traces "without root span", possibly created at around the time such message occurs?

Without extra context, I would not worry about those messages; they are probably just saying that a networking failure happened between the agent and the collector. IIRC, gRPC will just retry, so no data should have been lost.
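For illustration, here is a minimal sketch of the client-side behaviour this relies on, assuming plain grpc-go; the collector address and backoff values below are illustrative, not Jaeger's actual agent settings:

```go
package main

import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/backoff"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// The ClientConn survives broken TCP connections: grpc-go redials the
	// target with exponential backoff, so a transient reset between agent
	// and collector does not require any action by the caller.
	conn, err := grpc.Dial("collector.example.com:14250", // placeholder address
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithConnectParams(grpc.ConnectParams{
			Backoff: backoff.Config{
				BaseDelay:  time.Second,
				Multiplier: 1.6,
				Jitter:     0.2,
				MaxDelay:   30 * time.Second,
			},
			MinConnectTimeout: 5 * time.Second,
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// RPCs issued with grpc.WaitForReady(true) queue until the connection
	// is (re-)established instead of failing fast on transient errors.
}
```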

wiardvanrij commented 3 years ago

I forgot to update this issue, but I could pinpoint it to cloud provider health checks on the Kubernetes nodes. As this relates to gRPC, I've opened an issue here: https://github.com/grpc/grpc-go/issues/4234 - when that gets resolved, the package can be updated in Jaeger.
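For anyone who wants to confirm this locally, here is a minimal sketch that reproduces the warning by opening and closing a raw TCP connection against a gRPC listener, which is exactly what a TCP health check does. Port 14250 is just the collector's default gRPC port; run with GRPC_GO_LOG_SEVERITY_LEVEL=warning so grpc-go's default logger prints the message:

```go
package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", "127.0.0.1:14250")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	go srv.Serve(lis) // logs "failed to create ServerTransport" via grpclog

	// Simulate a TCP health check: connect and close without ever sending
	// the HTTP/2 client preface ("PRI * HTTP/2.0 ...").
	conn, err := net.Dial("tcp", "127.0.0.1:14250")
	if err != nil {
		log.Fatal(err)
	}
	conn.Close()

	time.Sleep(time.Second) // give the server a moment to log the warning
	srv.GracefulStop()
}
```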

wiardvanrij commented 3 years ago

This is now fixed in grpc-go; it would be nice to upgrade this package once there is a new release.

jpkrohling commented 3 years ago

Unfortunately, I think updating grpc-go is a bit problematic right now. @joe-elliott and @pavolloffay have tried in the past, with no luck.

wiardvanrij commented 3 years ago

> Unfortunately, I think updating grpc-go is a bit problematic right now. @joe-elliott and @pavolloffay have tried in the past, with no luck.

Ai, that does not boost my confidence xD - Do you perhaps have any related issues/PRs so I can have a look at what was blocking? At the moment this issue is spamming our logs insanely, so I have motivation to get it fixed (:

jpkrohling commented 3 years ago

I think the problem is related to gogo, which doesn't seem to work with any recent version of gRPC. And moving away from gogo would cause performance regressions. We have to eventually find a solution to this, as we are one CVE away from being forced to make a decision in a rush...

Perhaps @pavolloffay and @joe-elliott can add comments based on their recollection of the matter?

yurishkuro commented 3 years ago

Maybe we should look into the OTel Collector's pdata, which IIRC avoids the default proto performance issues.
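For reference, a rough sketch of what building and marshaling spans through pdata looks like; the import path and API below are as in recent collector releases, and nothing here is Jaeger code today:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/ptrace"
)

func main() {
	// pdata wraps the OTLP protobuf in a flat, pooled representation, so
	// building and marshaling traces avoids the reflection-heavy paths of
	// the default proto runtime.
	td := ptrace.NewTraces()
	span := td.ResourceSpans().AppendEmpty().
		ScopeSpans().AppendEmpty().
		Spans().AppendEmpty()
	span.SetName("example-operation")

	marshaler := &ptrace.ProtoMarshaler{}
	buf, err := marshaler.MarshalTraces(td)
	fmt.Println(len(buf), err)
}
```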

joe-elliott commented 3 years ago

Here is my attempt at upgrading:

https://github.com/jaegertracing/jaeger/pull/2857

The last few comments show the final hurdles I could not get past.

wiardvanrij commented 3 years ago

Perhaps this could be useful: https://vitess.io/blog/2021-06-03-a-new-protobuf-generator-for-go/

yurishkuro commented 3 years ago

^ this is a great find, I think we should try it out. At a minimum, it would unblock upgrading proto & grpc.
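For context, vtprotobuf generates pooled, reflection-free MarshalVT/UnmarshalVT/SizeVT methods next to the regular protoc-gen-go output. Roughly, the generated surface looks like the sketch below; the interface and helper are only illustrative, not real Jaeger or vtprotobuf code:

```go
package main

// Generation step (sketch):
//   protoc --go_out=. --go-vtproto_out=. \
//          --plugin=protoc-gen-go-vtproto model.proto

// vtMessage mirrors the extra methods vtprotobuf adds to every generated
// message, alongside the standard protobuf API.
type vtMessage interface {
	MarshalVT() ([]byte, error)    // reflection-free marshal
	UnmarshalVT(data []byte) error // reflection-free unmarshal
	SizeVT() int                   // cheap size computation
}

// encode works against any vtprotobuf-generated message without touching
// proto reflection, which is where the gogo-like performance win comes from.
func encode(m vtMessage) ([]byte, error) {
	return m.MarshalVT()
}

func main() {}
```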

wiardvanrij commented 3 years ago

Could this be re-opened? Even though we are upgrading to 1.38, the fix for this issue itself is not yet included in 1.38, as it was made only 6 days ago :) It would require an update to 1.39, or whatever version includes grpc/grpc-go#4234.

Thanks a lot for this upgrade move though. Awesome <3

rodoufu commented 3 years ago

It may be related to https://github.com/grpc/grpc-go/issues/875

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

stale[bot] commented 2 years ago

This issue has been automatically closed due to inactivity.

nicolastakashi commented 2 years ago

Hey Peeps! This issue is still happening. I've had the same issue in the same scenario reported by @prana24.

diegoluisi commented 1 year ago

{"level":"info","ts":1685345493.0991235,"caller":"grpc@v1.54.0/server.go:935","msg":"[core][Server #1] grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\"POST /api/v2/spans HTTP/\\"\"","system":"grpc","grpc_log":true}

peacedevil13 commented 8 months ago

{"level":"info","ts":1709008560.3461268,"caller":"grpc@v1.61.0/server.go:994","msg":"[core][Server #5] grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\"POST /v1/traces HTTP/1.1\\"\"","system":"grpc","grpc_log":true}

yurishkuro commented 8 months ago

You are sending plain HTTP to the gRPC port.
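A minimal sketch of the mismatch, assuming the default ports (14250 for collector gRPC, 9411 for the Zipkin-compatible /api/v2/spans endpoint when enabled, 4318 for OTLP HTTP /v1/traces); "collector" is a placeholder hostname:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Wrong: a plain HTTP POST against the gRPC listener. The gRPC server
	// reads "POST /api/v2/spans HTTP/1.1" where it expects the HTTP/2
	// connection preface, logs the "bogus greeting" warning, and closes
	// the connection.
	if _, err := http.Post("http://collector:14250/api/v2/spans",
		"application/json", nil); err != nil {
		fmt.Println("gRPC port rejected plain HTTP:", err)
	}

	// Right: point HTTP senders at the matching HTTP listener instead,
	// e.g. the Zipkin-compatible port for /api/v2/spans or the OTLP HTTP
	// port for /v1/traces (9411 and 4318 by default, if enabled).
	resp, err := http.Post("http://collector:9411/api/v2/spans",
		"application/json", nil)
	if err == nil {
		resp.Body.Close()
	}
}
```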