kevincantu closed this issue 3 months ago.
Thanks, by the way, to @pjanotti and @flands who helped me in the Gitter channel, and to this Contour ticket that pointed me at yages!
My spidey sense tells me this cmux issue may be related... 🤷‍♀️
Hi folks. Just a quick check to see if there is a timeline for this fix, since we are running into this as well.
Hey @kevincantu
As I'm not a Contour expert, I tested against 'vanilla' Envoy and got it working:
I'm wondering whether there's something Contour-specific going on, or whether I'm missing something. Let me know ;)
Oh that's encouraging: perhaps something in Envoy 1.16 fixes this? (The version of Contour I last tested with was using an earlier Envoy.)
Hey @kevincantu Any update on this? ;)
Any update on this? How can I disable TLS?
I'm no longer actively working on the same system which used this, so I haven't spun up a cluster to try any of this out again lately.
What I'd try, though, is setting up something like my example above, with a newer version of Contour (and its corresponding newer version of Envoy), and see whether the workaround I showed is still necessary!
Specifically:
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
...
data:
  otel-collector-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            # remove me?
            tls_settings:
              cert_file: /tls/cert.pem
              key_file: /tls/key.pem
...
```
```yaml
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: otel-collector
...
spec:
  ...
  tcpproxy:
    services:
      - name: otel-collector
        port: 55680
        # tls: HTTP/1 TLS
        # h2:  HTTP/2 TLS
        # h2c: HTTP/2 cleartext
        protocol: h2  # try making me "h2c"?
```
How do I enable mTLS for the receiver?
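(For anyone with the same question: the OTLP receiver's TLS settings also accept a client CA, which makes the receiver verify client certificates, i.e. mutual TLS. A sketch, with placeholder file paths; on recent collector versions the key is spelled `tls` rather than `tls_settings`:)

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        tls_settings:
          cert_file: /tls/cert.pem
          key_file: /tls/key.pem
          # Setting a client CA turns on client-certificate
          # verification, i.e. mutual TLS.
          client_ca_file: /tls/ca.pem
```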
Closing as inactive, please reopen if this is still being worked on.
I managed to run into this: it seems that despite following https://projectcontour.io/docs/main/guides/grpc/ the Envoy instance is sending HTTP/1 requests to the otel-collector instance.
I got it working when I switched from the Ingress object to the Contour-specific HTTPProxy object. I'll try to figure out whether there's a difference between the configurations they generate.
I've just gotten started setting up otel-collector for some Kubernetes clusters where we use Envoy (configured via Contour) for routing, and discovered a detail that gave me fits, so I think it's worth laying it all out here. I suspect it may be a gRPC server issue in the collector: some gnarly interaction with Envoy, perhaps?
Expected
What I hoped was that otel-collector could be set up much like this demo with YAGES (a gRPC echo server), where:
I set this up using a Contour HTTPProxy in TCP proxying mode, which relies on SNI to route traffic by domain name:
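The shape of that config is roughly the following (a sketch based on the Contour docs; the FQDN, secret name, and port are placeholders from the yages demo, not verbatim from my cluster):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: yages
spec:
  virtualhost:
    fqdn: yages.example.com    # Envoy matches this hostname via SNI
    tls:
      secretName: yages-tls    # TLS cert/key for the fqdn above
  tcpproxy:
    services:
      - name: yages
        port: 9000
```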
You can exercise that yages app (to send a ping and receive a pong) with the following grpcurl command:
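That command looks roughly like this (a sketch assuming the yages demo's `yages.Echo` service and an example domain; adjust for your cluster):

```shell
# Ping the yages echo server through Envoy; SNI on port 443
# routes the connection to the right upstream.
grpcurl yages.example.com:443 yages.Echo/Ping
# A successful call answers with a pong.
```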
I expected routing just like that to work for otel-collector:
`opentelemetry-exporter-otlp` sends gRPC TLS traffic,

Actual
But that didn't work.
Instead, when configuring Envoy (via Contour) like that, I saw TCP events in the Envoy access logs like so, but no success:
My sample app (sending traffic to `otel-grpc.staging.test:443`) only received `StatusCode.UNAVAILABLE` error responses! (I extended this part of the opentelemetry-exporter-otlp Python library to log those codes.)

Workaround
To make things work, I had to configure Envoy to pass HTTP/2 TLS traffic to the upstream.
Like so:
`opentelemetry-exporter-otlp` sends gRPC TLS traffic,

That is, in addition to the TLS cert setup for otel-collector, this Contour HTTPProxy config change:
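The relevant change is the upstream protocol annotation on the tcpproxy service; a sketch, assuming the service name and port from the example earlier in the thread:

```yaml
spec:
  tcpproxy:
    services:
      - name: otel-collector
        port: 55680
        protocol: h2  # tell Envoy the upstream speaks HTTP/2 over TLS
```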
Bug?
Specifically, I found that when routing OTLP (gRPC) traffic wrapped in HTTP/2 TLS:
I think that means that there's something we could do here to make otel-collector's gRPC server play nicely with Envoy!