envoyproxy / envoy

Cloud-native high-performance edge/middle/service proxy
https://www.envoyproxy.io
Apache License 2.0

upstream_bind_config does not work with UDP. (Connected to #15516) #15590

Open davidkornel opened 3 years ago

davidkornel commented 3 years ago

This issue was opened because @mattklein123 asked for it in #15516. Using the upstream_bind_config field with UDP does not seem to work.

Repro steps + config: I tested the following config with a socat UDP sender sending datagrams to 127.0.0.1:1234 and a socat receiver listening on 127.0.0.1:1235. My full static config is attached: upstream_bind_config_test.txt
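Since the attachment is not reproduced here, the following is a minimal sketch of the kind of static config described above. The addresses and ports come from the repro description; the listener/cluster names and the rest of the structure are assumptions:

```yaml
static_resources:
  listeners:
  - name: udp_listener
    address:
      socket_address:
        protocol: UDP
        address: 127.0.0.1
        port_value: 1234
    listener_filters:
    - name: envoy.filters.udp_listener.udp_proxy
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.udp.udp_proxy.v3.UdpProxyConfig
        stat_prefix: udp_proxy
        cluster: udp_upstream
  clusters:
  - name: udp_upstream
    type: STATIC
    connect_timeout: 0.25s
    # The source address/port the outgoing upstream socket is expected to bind to.
    upstream_bind_config:
      source_address:
        address: 127.0.0.2
        port_value: 9999
    load_assignment:
      cluster_name: udp_upstream
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                protocol: UDP
                address: 127.0.0.1
                port_value: 1235
```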

Expectation: The outgoing connection's source was expected to be bound to 127.0.0.2:9999, but that did not happen. The socat receiver accepted a connection from a randomly chosen port on 127.0.0.1 picked by Envoy: 2021/03/17 14:10:30 socat[28284] N accepting UDP connection from AF=2 127.0.0.1:51755

Note: The same upstream_bind_config configuration works well with HTTP, as you would expect.

@mattklein123, you mentioned that

The problem with bind config with a single port is it breaks session tracking

May I ask for a deeper explanation on this topic?

mattklein123 commented 3 years ago

May I ask for a deeper explanation on this topic?

The issue is that udp_proxy requires a different source port for every "session" (downstream IP/port) in order to correctly associate upstream datagrams with the right downstream client. I'm not sure what bind config actually means in this case. Would you want to just use the IP part of the bind config and not the port?
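For illustration only (this snippet is not from the thread): using "just the IP part" would roughly correspond to a bind config that leaves the port choice to the kernel, so every session still gets its own ephemeral source port:

```yaml
upstream_bind_config:
  source_address:
    address: 127.0.0.2
    port_value: 0   # 0 = let the kernel pick a distinct ephemeral port per session
```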

davidkornel commented 3 years ago

@mattklein123 I see the problem now, thanks.

Would you want to just use the IP part of the bind config and not the port?

The opposite: we mostly want to use the port part of the bind config. Our use case, so you can see our main goal and problems: the upstream is a proprietary UDP/RTP proxy. The RTP specification recommends even port numbers for RTP and the next odd port number for the associated RTCP session. That is why we want to control the upstream socket's port.

In our case, for every new UDP session a simple control plane creates a new listener -> cluster -> endpoint pipeline, so there will never be more than one session per cluster, and each cluster's bind config would bind to a port that is not already in use (see the sketch below). That way the "multiple UDP sessions with the same source ip:port" problem does not matter. I know that overlooking a problem like this just because "it won't matter" is not the best idea, but in specific cases like ours it could work. Just an idea: maybe this should be highlighted in the docs so Envoy users know to watch out for it when writing their config.
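As a sketch of what the per-session pipelines described above could look like (cluster names, addresses, and port numbers are made up for illustration): one cluster bound to an even source port for RTP and a second bound to the following odd port for RTCP:

```yaml
clusters:
- name: rtp_session_42
  type: STATIC
  upstream_bind_config:
    source_address:
      address: 127.0.0.2
      port_value: 10000   # even source port for RTP, chosen by the control plane
  load_assignment:
    cluster_name: rtp_session_42
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 10.0.0.5      # upstream RTP proxy (example address)
              port_value: 20000
- name: rtcp_session_42
  type: STATIC
  upstream_bind_config:
    source_address:
      address: 127.0.0.2
      port_value: 10001   # next odd source port for the associated RTCP session
  load_assignment:
    cluster_name: rtcp_session_42
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 10.0.0.5
              port_value: 20001
```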

WDYT? Could this be supported at some point?

mattklein123 commented 3 years ago

I see, OK, thanks. I think it's probably fine to support bind config with the clearly documented understanding that it can only work with a single session. I will leave this marked as help wanted. I'm not sure when I will be able to get to this, but I can help someone work through it if they are interested.