Closed thangnn4688 closed 2 years ago
cc @ggreenway @soulxu @rgs1
Note that this won't work unless the load balancer in front of Envoy is speaking ProxyProtocol. The way the ProxyProtocol filter works – and the spec itself mandates this too – is that if it fails to parse the header at the start of the connection, it gives up and refuses the connection. So if things work when you remove the ProxyProtocol listener filter, it is probably because your load balancer is in fact not sending the ProxyProtocol header.
FWIW, we deal with a similar scenario using a variation of what was proposed in https://github.com/envoyproxy/envoy/pull/17286.
My Envoy proxy does not have any LBs in front of it in this case. I also use Envoy as a UDP proxy, and the endpoints can get the remote/client IP address normally from the UDP packets forwarded by Envoy. My UDP proxy configuration is below:
========
Why can my Envoy proxy detect the remote/client IP address in UDP packets and forward it to the endpoints, but cannot do the same with TCP packets? (Both TCP and UDP listeners are configured together in the file envoy.yaml.)
When you enable the original src filter, it sets the IP_TRANSPARENT option on the upstream socket. But IP_TRANSPARENT requires the CAP_NET_ADMIN capability, so you need to ensure your Envoy process has CAP_NET_ADMIN.
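The dependency between the socket option and the capability can be checked directly. Below is a minimal sketch (my own illustration, not from this thread) that tries to set IP_TRANSPARENT on a throwaway socket; it assumes Linux, where the option value is 19 on SOL_IP:

```python
import socket

# IP_TRANSPARENT is not exposed by the socket module on every Python
# version; on Linux its value is 19 (from <linux/in.h>).
IP_TRANSPARENT = getattr(socket, "IP_TRANSPARENT", 19)

def can_set_ip_transparent() -> bool:
    """Try to set IP_TRANSPARENT on a throwaway socket.

    This succeeds only when the process has CAP_NET_ADMIN (or is root),
    which is the same privilege check Envoy's upstream socket hits when
    the original_src filter is enabled.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1)
        return True
    except PermissionError:
        return False
    finally:
        sock.close()
```

If this returns False for the user Envoy runs as, the upstream connections that original_src creates will fail.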
I have tried to set CAP_NET_ADMIN on the Envoy binaries as below:
root@test-envoy:/etc/envoy# getcap /usr/local/bin/func-e
/usr/local/bin/func-e = cap_net_admin+ep
root@test-envoy:/etc/envoy#
root@test-envoy:/etc/envoy# getcap /root/.func-e/versions/1.21.1/bin/envoy
/root/.func-e/versions/1.21.1/bin/envoy = cap_net_admin+ep
root@test-envoy:/etc/envoy#
root@test-envoy:/etc/envoy# ./status.sh
root 91885 1 0 10:30 pts/0 00:00:00 func-e run -c /etc/envoy/envoy.yaml
root 91890 91885 0 10:30 pts/0 00:00:00 /root/.func-e/versions/1.21.1/bin/envoy -c /etc/envoy/envoy.yaml --admin-address-path /root/.func-e/runs/1650079811770347868/admin-address.txt
tcp 0 0 0.0.0.0:9901 0.0.0.0:* LISTEN 91890/envoy
tcp 0 0 0.0.0.0:528 0.0.0.0:* LISTEN 91890/envoy
tcp 0 0 0.0.0.0:528 0.0.0.0:* LISTEN 91890/envoy
tcp 0 0 0.0.0.0:528 0.0.0.0:* LISTEN 91890/envoy
tcp 0 0 0.0.0.0:528 0.0.0.0:* LISTEN 91890/envoy
udp 0 0 0.0.0.0:528 0.0.0.0:* 91890/envoy
udp 0 0 0.0.0.0:528 0.0.0.0:* 91890/envoy
udp 0 0 0.0.0.0:528 0.0.0.0:* 91890/envoy
udp 0 0 0.0.0.0:528 0.0.0.0:* 91890/envoy
But my Splunk server still receives only UDP packets from my Envoy proxy server, the same as before I ran "setcap cap_net_admin+ep" on the func-e and envoy binaries. That means the TCP packets are dropped/filtered by the Envoy server because of the "listener_filters" configuration, since the Envoy server itself still receives the TCP packets (log data) sent from the servers of the other systems.
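File capabilities on the binaries only help if they actually end up in the running process's effective set, so it is worth checking the live PID rather than the file. A small sketch (my own illustration, assuming Linux's /proc layout; CAP_NET_ADMIN is bit 12 in the CapEff mask):

```python
def effective_caps(pid: str = "self") -> int:
    """Parse the CapEff bitmask from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("CapEff:"):
                return int(line.split()[1], 16)
    raise RuntimeError("CapEff line not found")

CAP_NET_ADMIN = 12  # bit index, from <linux/capability.h>

def has_net_admin(pid: str = "self") -> bool:
    """True when the process's effective set includes CAP_NET_ADMIN."""
    return bool(effective_caps(pid) >> CAP_NET_ADMIN & 1)
```

Run it against the worker Envoy PID (91890 in the ps output above) to confirm the capability survived the func-e wrapper.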
@thangnn4688 do you have logs from Envoy? Then we can see why the connection was rejected. You can run Envoy with the parameter '-l trace' to get them.
Also note, those listener filters don't have any effect on UDP. So the proxy_protocol filter can reject your request due to a missing proxy protocol header, and the missing CAP_NET_ADMIN can make your upstream request fail. That is why your TCP fails but UDP works.
I just ran the Envoy process (/usr/local/sbin/envoy = cap_net_admin+ep) with the parameter '-l trace', and I see the logs below when I send a TCP packet to the Envoy server:
[2022-04-18 10:05:53.129][23332][debug][main] [external/envoy/source/server/server.cc:209] flushing stats
[2022-04-18 10:05:58.129][23332][debug][main] [external/envoy/source/server/server.cc:209] flushing stats
[2022-04-18 10:06:00.726][23349][debug][filter] [external/envoy/source/extensions/filters/listener/proxy_protocol/proxy_protocol.cc:69] proxy_protocol: New connection accepted
[2022-04-18 10:06:00.726][23349][debug][filter] [external/envoy/source/extensions/filters/listener/proxy_protocol/proxy_protocol.cc:435] failed to read proxy protocol (no bytes read)
[2022-04-18 10:06:00.726][23349][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=1)
[2022-04-18 10:06:00.726][23349][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:115] clearing deferred deletion list (size=1)
[2022-04-18 10:06:03.129][23332][debug][main] [external/envoy/source/server/server.cc:209] flushing stats
[2022-04-18 10:06:08.130][23332][debug][main] [external/envoy/source/server/server.cc:209] flushing stats
When I send a UDP packet to the Envoy server, the logs are shown below (and differ from the TCP case):
[2022-04-18 10:06:23.131][23332][debug][main] [external/envoy/source/server/server.cc:209] flushing stats
[2022-04-18 10:06:23.913][23359][trace][udp] [external/envoy/source/common/network/udp_listener_impl.cc:61] Listener at 0.0.0.0:528 :socket event: 3
[2022-04-18 10:06:23.913][23359][trace][udp] [external/envoy/source/common/network/udp_listener_impl.cc:73] Listener at 0.0.0.0:528 :handleReadCallback
[2022-04-18 10:06:23.913][23359][trace][misc] [external/envoy/source/common/network/utility.cc:648] starting recvmmsg with packets=16 max=9000
[2022-04-18 10:06:23.913][23359][trace][misc] [external/envoy/source/common/network/utility.cc:655] recvmmsg read 1 packets
[2022-04-18 10:06:23.913][23359][debug][misc] [external/envoy/source/common/network/utility.cc:665] Receive a packet with 11 bytes from 172.24.206.84:39807
[2022-04-18 10:06:23.914][23359][debug][filter] [external/envoy/source/extensions/filters/udp/udp_proxy/udp_proxy_filter.cc:172] creating new session: downstream=172.24.206.84:39807 local=172.24.206.85:528 upstream=172.24.206.56:528
[2022-04-18 10:06:23.914][23359][debug][filter] [external/envoy/source/extensions/filters/udp/udp_proxy/udp_proxy_filter.cc:188] The original src is enabled for address 172.24.206.84:39807.
[2022-04-18 10:06:23.914][23359][trace][filter] [external/envoy/source/extensions/filters/udp/udp_proxy/udp_proxy_filter.cc:236] writing 11 byte datagram upstream: downstream=172.24.206.84:39807 local=172.24.206.85:528 upstream=172.24.206.56:528
[2022-04-18 10:06:23.914][23359][trace][misc] [external/envoy/source/common/network/utility.cc:549] sendmsg bytes 11
[2022-04-18 10:06:23.914][23359][trace][misc] [external/envoy/source/common/network/utility.cc:648] starting recvmmsg with packets=16 max=9000
[2022-04-18 10:06:23.914][23359][trace][udp] [external/envoy/source/common/network/udp_listener_impl.cc:100] Listener at 0.0.0.0:528 :handleWriteCallback
[2022-04-18 10:06:28.132][23332][debug][main] [external/envoy/source/server/server.cc:209] flushing stats
[2022-04-18 10:06:33.133][23332][debug][main] [external/envoy/source/server/server.cc:209] flushing stats
I think the problem with the TCP packets begins with the log line "failed to read proxy protocol (no bytes read)". Please explain it in more detail so I can understand the reason; thanks very much for your help!
This is because you enabled the proxy_protocol filter, but your request doesn't include the proxy protocol header. You can remove that filter.
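For reference, what the filter waits for is a human-readable v1 header terminated by CRLF before any payload bytes. A minimal sketch of building and sending one (my own illustration; addresses and ports are taken from the logs above only as examples):

```python
import socket

def proxy_v1_header(src_ip: str, dst_ip: str,
                    src_port: int, dst_port: int) -> bytes:
    """Build a PROXY protocol v1 header (TCP over IPv4)."""
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")

def send_with_proxy_header(envoy_host: str, envoy_port: int, payload: bytes,
                           src_ip: str, src_port: int,
                           dst_ip: str, dst_port: int) -> None:
    """Send payload prefixed with a PROXY v1 header, which the
    proxy_protocol listener filter requires at the very start of
    the connection."""
    header = proxy_v1_header(src_ip, dst_ip, src_port, dst_port)
    with socket.create_connection((envoy_host, envoy_port)) as sock:
        sock.sendall(header + payload)
```

Clients that connect directly (like the syslog senders here) do not emit this header, which is exactly why the filter reports "failed to read proxy protocol".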
But if I remove the listener_filters configuration of the TCP proxy, the TCP proxy (part of the Envoy server) cannot get the remote/client IP address to insert into the log content (TCP packets) that is forwarded to the endpoints. Do you have any ideas to solve my case without using listener_filters for the TCP proxy? Thanks very much!
Don't remove the TCP proxy filter; remove the proxy protocol filter under listener_filters:

- name: envoy.filters.listener.proxy_protocol
  typed_config:
    '@type': type.googleapis.com/envoy.extensions.filters.listener.proxy_protocol.v3.ProxyProtocol
I just removed the proxy_protocol extension and used only the original_src extension; the result is that the TCP proxy still rejects/filters all TCP packets sent to the Envoy server. Only when I remove the whole listener_filters block are the TCP packets forwarded to the endpoints by the TCP proxy. This case is so hard!
This is the log content of the Envoy process when I use only the "original_src" extension in the "listener_filters" of the TCP proxy:
[2022-04-19 15:37:46.416][24571][debug][main] [external/envoy/source/server/server.cc:209] flushing stats
[2022-04-19 15:37:50.846][24581][debug][filter] [external/envoy/source/extensions/filters/listener/original_src/original_src.cc:24] Got a new connection in the original_src filter for address 172.24.206.84:48694. Marking with 0
[2022-04-19 15:37:50.847][24581][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:249] [C0] new tcp proxy session
[2022-04-19 15:37:50.847][24581][trace][connection] [external/envoy/source/common/network/connection_impl.cc:349] [C0] readDisable: disable=true disable_count=0 state=0 buffer_length=0
[2022-04-19 15:37:50.847][24581][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:397] [C0] Creating connection to cluster 528_tcp
[2022-04-19 15:37:50.847][24581][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:98] creating a new connection
[2022-04-19 15:37:50.847][24581][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:383] [C1] connecting
[2022-04-19 15:37:50.847][24581][debug][connection] [external/envoy/source/common/network/connection_impl.cc:860] [C1] connecting to 172.24.206.56:528
[2022-04-19 15:37:50.847][24581][debug][connection] [external/envoy/source/common/network/connection_impl.cc:876] [C1] connection in progress
[2022-04-19 15:37:50.847][24581][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:125] queueing request due to no available connections
[2022-04-19 15:37:50.847][24581][debug][conn_handler] [external/envoy/source/server/active_tcp_listener.cc:328] [C0] new connection
[2022-04-19 15:37:50.847][24581][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C0] socket event: 2
[2022-04-19 15:37:50.847][24581][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C0] write ready
[2022-04-19 15:37:51.346][24581][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:422] [C1] connect timeout
[2022-04-19 15:37:51.346][24581][debug][connection] [external/envoy/source/common/network/connection_impl.cc:133] [C1] closing data_to_write=0 type=1
[2022-04-19 15:37:51.346][24581][debug][connection] [external/envoy/source/common/network/connection_impl.cc:243] [C1] closing socket: 1
[2022-04-19 15:37:51.346][24581][trace][connection] [external/envoy/source/common/network/connection_impl.cc:410] [C1] raising connection event 1
[2022-04-19 15:37:51.346][24581][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:140] [C1] client disconnected
[2022-04-19 15:37:51.346][24581][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:559] [C0] connect timeout
[2022-04-19 15:37:51.346][24581][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:397] [C0] Creating connection to cluster 528_tcp
[2022-04-19 15:37:51.346][24581][debug][connection] [external/envoy/source/common/network/connection_impl.cc:133] [C0] closing data_to_write=0 type=1
[2022-04-19 15:37:51.346][24581][debug][connection] [external/envoy/source/common/network/connection_impl.cc:243] [C0] closing socket: 1
[2022-04-19 15:37:51.346][24581][trace][connection] [external/envoy/source/common/network/connection_impl.cc:410] [C0] raising connection event 1
[2022-04-19 15:37:51.346][24581][debug][conn_handler] [external/envoy/source/server/active_tcp_listener.cc:76] [C0] adding to cleanup list
[2022-04-19 15:37:51.346][24581][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=1)
[2022-04-19 15:37:51.346][24581][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=2)
[2022-04-19 15:37:51.346][24581][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=3)
[2022-04-19 15:37:51.346][24581][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:115] clearing deferred deletion list (size=3)
[2022-04-19 15:37:51.347][24581][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:255] [C1] connection destroyed
[2022-04-19 15:37:51.417][24571][debug][main] [external/envoy/source/server/server.cc:209] flushing stats
@soulxu could you suggest any other way to solve this issue? Many thanks!
@soulxu do you have any ideas to solve this issue?
Sorry, I drafted a reply but forgot to send it out.
I didn't get any clue from your log.
Another idea: would you like to try my config, which works in my local setup:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      protocol: TCP
      address: 127.0.0.1
      port_value: 9901
static_resources:
  listeners:
  - name: my_listener
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 13333
    listener_filters:
    - name: envoy.filters.listener.original_src
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.listener.original_src.v3.OriginalSrc
    filter_chains:
    - name: tcp
      filters:
      - name: envoy.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress_tcp
          cluster: my_cluster
  clusters:
  - name: my_cluster
    connect_timeout: 0.25s
    type: static
    lb_policy: round_robin
    protocol_selection: USE_DOWNSTREAM_PROTOCOL
    load_assignment:
      cluster_name: my_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 0.0.0.0
                port_value: 33333
I have tried your config and the result is the same as with my config: the Envoy proxy still denied/filtered all TCP packets sent to port 528/TCP of the Envoy server, per the logs below:
[2022-05-04 09:50:26.473][34493][debug][filter] [external/envoy/source/extensions/filters/listener/original_src/original_src.cc:24] Got a new connection in the original_src filter for address 172.24.206.84:38802. Marking with 0
[2022-05-04 09:50:26.474][34493][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:249] [C0] new tcp proxy session
[2022-05-04 09:50:26.474][34493][trace][connection] [external/envoy/source/common/network/connection_impl.cc:349] [C0] readDisable: disable=true disable_count=0 state=0 buffer_length=0
[2022-05-04 09:50:26.474][34493][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:397] [C0] Creating connection to cluster 528_tcp
[2022-05-04 09:50:26.474][34493][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:98] creating a new connection
[2022-05-04 09:50:26.474][34493][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:383] [C1] connecting
[2022-05-04 09:50:26.474][34493][debug][connection] [external/envoy/source/common/network/connection_impl.cc:860] [C1] connecting to 172.24.206.56:528
[2022-05-04 09:50:26.474][34493][debug][connection] [external/envoy/source/common/network/connection_impl.cc:876] [C1] connection in progress
[2022-05-04 09:50:26.474][34493][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:125] queueing request due to no available connections
[2022-05-04 09:50:26.474][34493][debug][conn_handler] [external/envoy/source/server/active_tcp_listener.cc:328] [C0] new connection
[2022-05-04 09:50:26.474][34493][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C0] socket event: 2
[2022-05-04 09:50:26.474][34493][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C0] write ready
[2022-05-04 09:50:26.974][34493][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:422] [C1] connect timeout
[2022-05-04 09:50:26.974][34493][debug][connection] [external/envoy/source/common/network/connection_impl.cc:133] [C1] closing data_to_write=0 type=1
[2022-05-04 09:50:26.974][34493][debug][connection] [external/envoy/source/common/network/connection_impl.cc:243] [C1] closing socket: 1
[2022-05-04 09:50:26.974][34493][trace][connection] [external/envoy/source/common/network/connection_impl.cc:410] [C1] raising connection event 1
[2022-05-04 09:50:26.974][34493][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:140] [C1] client disconnected
[2022-05-04 09:50:26.974][34493][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:559] [C0] connect timeout
[2022-05-04 09:50:26.974][34493][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:397] [C0] Creating connection to cluster 528_tcp
[2022-05-04 09:50:26.974][34493][debug][connection] [external/envoy/source/common/network/connection_impl.cc:133] [C0] closing data_to_write=0 type=1
[2022-05-04 09:50:26.974][34493][debug][connection] [external/envoy/source/common/network/connection_impl.cc:243] [C0] closing socket: 1
[2022-05-04 09:50:26.974][34493][trace][connection] [external/envoy/source/common/network/connection_impl.cc:410] [C0] raising connection event 1
[2022-05-04 09:50:26.974][34493][debug][conn_handler] [external/envoy/source/server/active_tcp_listener.cc:76] [C0] adding to cleanup list
[2022-05-04 09:50:26.974][34493][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=1)
[2022-05-04 09:50:26.974][34493][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=2)
[2022-05-04 09:50:26.974][34493][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=3)
[2022-05-04 09:50:26.974][34493][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:115] clearing deferred deletion list (size=3)
[2022-05-04 09:50:26.974][34493][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:255] [C1] connection destroyed
@soulxu do you have any load balancers in front of your Envoy proxy?
No, I don't have one. I just connected to Envoy directly with curl -v http://127.0.0.1:13333, and I ran fortio server --http-port 33333 as the backend server. I get HTTP 200 back.
Would you like to try the exact same config as mine? Start with something simple first.
@soulxu Yes, please share your config file so I can try it, many thanks. As another guess, do you think I should use a registered/dynamic port instead of a well-known port (currently I am using port 528/TCP)?
The above config is my whole setup. You can try the exact same port and also use fortio as the backend server.
@soulxu I tried your config and got the logs below:
[2022-05-04 14:17:31.533][34945][debug][filter] [external/envoy/source/extensions/filters/listener/original_src/original_src.cc:24] Got a new connection in the original_src filter for address 172.24.206.84:56402. Marking with 0
[2022-05-04 14:17:31.533][34945][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:249] [C0] new tcp proxy session
[2022-05-04 14:17:31.533][34945][trace][connection] [external/envoy/source/common/network/connection_impl.cc:349] [C0] readDisable: disable=true disable_count=0 state=0 buffer_length=0
[2022-05-04 14:17:31.533][34945][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:397] [C0] Creating connection to cluster my_cluster
[2022-05-04 14:17:31.533][34945][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:98] creating a new connection
[2022-05-04 14:17:31.533][34945][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:383] [C1] connecting
[2022-05-04 14:17:31.533][34945][debug][connection] [external/envoy/source/common/network/connection_impl.cc:860] [C1] connecting to 0.0.0.0:33333
[2022-05-04 14:17:31.533][34945][debug][connection] [external/envoy/source/common/network/connection_impl.cc:876] [C1] connection in progress
[2022-05-04 14:17:31.533][34945][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:125] queueing request due to no available connections
[2022-05-04 14:17:31.533][34945][debug][conn_handler] [external/envoy/source/server/active_tcp_listener.cc:328] [C0] new connection
[2022-05-04 14:17:31.533][34945][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C0] socket event: 2
[2022-05-04 14:17:31.533][34945][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C0] write ready
[2022-05-04 14:17:31.782][34945][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:422] [C1] connect timeout
[2022-05-04 14:17:31.782][34945][debug][connection] [external/envoy/source/common/network/connection_impl.cc:133] [C1] closing data_to_write=0 type=1
[2022-05-04 14:17:31.782][34945][debug][connection] [external/envoy/source/common/network/connection_impl.cc:243] [C1] closing socket: 1
[2022-05-04 14:17:31.782][34945][trace][connection] [external/envoy/source/common/network/connection_impl.cc:410] [C1] raising connection event 1
[2022-05-04 14:17:31.782][34945][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:140] [C1] client disconnected
[2022-05-04 14:17:31.782][34945][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:559] [C0] connect timeout
[2022-05-04 14:17:31.782][34945][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:397] [C0] Creating connection to cluster my_cluster
[2022-05-04 14:17:31.782][34945][debug][connection] [external/envoy/source/common/network/connection_impl.cc:133] [C0] closing data_to_write=0 type=1
[2022-05-04 14:17:31.782][34945][debug][connection] [external/envoy/source/common/network/connection_impl.cc:243] [C0] closing socket: 1
[2022-05-04 14:17:31.782][34945][trace][connection] [external/envoy/source/common/network/connection_impl.cc:410] [C0] raising connection event 1
[2022-05-04 14:17:31.782][34945][debug][conn_handler] [external/envoy/source/server/active_tcp_listener.cc:76] [C0] adding to cleanup list
[2022-05-04 14:17:31.782][34945][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=1)
[2022-05-04 14:17:31.782][34945][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=2)
[2022-05-04 14:17:31.782][34945][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=3)
[2022-05-04 14:17:31.782][34945][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:115] clearing deferred deletion list (size=3)
[2022-05-04 14:17:31.782][34945][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:255] [C1] connection destroyed
root 34923 31765 3 14:16 pts/0 00:00:00 /usr/local/sbin/envoy -c envoy_test.yaml -l trace --log-path /old-data/etc/envoy/envoy.log
tcp 0 0 127.0.0.1:9901 0.0.0.0:* LISTEN 34923/envoy
tcp 0 0 0.0.0.0:528 0.0.0.0:* LISTEN 34923/envoy
tcp 0 0 :::33333 :::* LISTEN 34916/fortio
They look like my earlier logs too, and I think the key line contains the words "queueing request due to no available connections". Why can't the Envoy proxy find any available connections?
@soulxu do you have any ideas?
Sorry, I have no idea, but it seems it failed with a connection timeout.
[2022-05-04 14:17:31.782][34945][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:140] [C1] client disconnected
[2022-05-04 14:17:31.782][34945][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:559] [C0] connect timeout
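One plausible explanation for this connect timeout, worth noting as an editor's aside: with original_src enabled, Envoy binds the upstream connection to the downstream client's IP, so the backend addresses its SYN-ACK to the real client rather than to the Envoy host, and the TCP handshake never completes unless the network routes those replies back through Envoy. The sketch below shows the kind of policy routing typically paired with original_src on the Envoy host; the mark value 123 is an assumption and must match the filter's mark setting, and the backend must additionally route the client address range back via the Envoy host.

```shell
# Assumes original_src is configured with mark: 123.
# Save the mark on upstream packets and restore it on replies:
iptables -t mangle -I PREROUTING -m mark --mark 123 -j CONNMARK --save-mark
iptables -t mangle -I OUTPUT -m connmark --mark 123 -j CONNMARK --restore-mark
# Route marked traffic through a local table so replies return via this host:
ip rule add fwmark 123 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```

Without this setup, UDP can still appear to work one-way (datagrams reach the endpoint carrying the spoofed source), while TCP fails exactly as in the logs above, because TCP requires the return path.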
@soulxu please see the logs below, created when I remove the "listener_filters" block in envoy.yaml; all TCP packets are forwarded to the endpoints successfully:
[2022-05-05 16:20:23.740][35899][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:249] [C0] new tcp proxy session
[2022-05-05 16:20:23.740][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:349] [C0] readDisable: disable=true disable_count=0 state=0 buffer_length=0
[2022-05-05 16:20:23.740][35899][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:397] [C0] Creating connection to cluster 528_tcp
[2022-05-05 16:20:23.740][35899][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:98] creating a new connection
[2022-05-05 16:20:23.740][35899][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:383] [C1] connecting
[2022-05-05 16:20:23.740][35899][debug][connection] [external/envoy/source/common/network/connection_impl.cc:860] [C1] connecting to 172.24.206.70:528
[2022-05-05 16:20:23.740][35899][debug][connection] [external/envoy/source/common/network/connection_impl.cc:876] [C1] connection in progress
[2022-05-05 16:20:23.740][35899][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:125] queueing request due to no available connections
[2022-05-05 16:20:23.740][35899][debug][conn_handler] [external/envoy/source/server/active_tcp_listener.cc:328] [C0] new connection
[2022-05-05 16:20:23.740][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C0] socket event: 2
[2022-05-05 16:20:23.740][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C0] write ready
[2022-05-05 16:20:23.740][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C1] socket event: 2
[2022-05-05 16:20:23.740][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C1] write ready
[2022-05-05 16:20:23.740][35899][debug][connection] [external/envoy/source/common/network/connection_impl.cc:665] [C1] connected
[2022-05-05 16:20:23.740][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:410] [C1] raising connection event 2
[2022-05-05 16:20:23.740][35899][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:303] [C1] assigning connection
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:349] [C0] readDisable: disable=false disable_count=1 state=0 buffer_length=0
[2022-05-05 16:20:23.741][35899][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:660] [C0] TCP:onUpstreamEvent(), requestedServerName:
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C0] socket event: 3
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C0] write ready
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:585] [C0] read ready. dispatch_buffered_data=false
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/raw_buffer_socket.cc:24] [C0] read returns: 11
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/raw_buffer_socket.cc:24] [C0] read returns: 0
[2022-05-05 16:20:23.741][35899][trace][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:570] [C0] downstream connection received 11 bytes, end_stream=true
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:470] [C1] writing 11 bytes, end_stream true
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C1] socket event: 2
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C1] write ready
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/raw_buffer_socket.cc:67] [C1] write returns: 11
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C1] socket event: 2
[2022-05-05 16:20:23.741][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C1] write ready
[2022-05-05 16:20:23.781][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C1] socket event: 2
[2022-05-05 16:20:23.781][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C1] write ready
[2022-05-05 16:20:24.005][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:547] [C1] socket event: 3
[2022-05-05 16:20:24.005][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:656] [C1] write ready
[2022-05-05 16:20:24.005][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:585] [C1] read ready. dispatch_buffered_data=false
[2022-05-05 16:20:24.005][35899][trace][connection] [external/envoy/source/common/network/raw_buffer_socket.cc:24] [C1] read returns: 0
[2022-05-05 16:20:24.005][35899][trace][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:616] [C0] upstream connection received 0 bytes, end_stream=true
[2022-05-05 16:20:24.005][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:470] [C0] writing 0 bytes, end_stream true
[2022-05-05 16:20:24.005][35899][debug][connection] [external/envoy/source/common/network/connection_impl.cc:633] [C1] remote close
[2022-05-05 16:20:24.005][35899][debug][connection] [external/envoy/source/common/network/connection_impl.cc:243] [C1] closing socket: 0
[2022-05-05 16:20:24.005][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:410] [C1] raising connection event 0
[2022-05-05 16:20:24.005][35899][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:140] [C1] client disconnected
[2022-05-05 16:20:24.005][35899][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=1)
[2022-05-05 16:20:24.005][35899][debug][connection] [external/envoy/source/common/network/connection_impl.cc:133] [C0] closing data_to_write=0 type=0
[2022-05-05 16:20:24.005][35899][debug][connection] [external/envoy/source/common/network/connection_impl.cc:243] [C0] closing socket: 1
[2022-05-05 16:20:24.005][35899][trace][connection] [external/envoy/source/common/network/connection_impl.cc:410] [C0] raising connection event 1
[2022-05-05 16:20:24.005][35899][debug][conn_handler] [external/envoy/source/server/active_tcp_listener.cc:76] [C0] adding to cleanup list
[2022-05-05 16:20:24.005][35899][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=2)
[2022-05-05 16:20:24.005][35899][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:252] item added to deferred deletion list (size=3)
[2022-05-05 16:20:24.005][35899][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:115] clearing deferred deletion list (size=3)
[2022-05-05 16:20:24.005][35899][debug][pool] [external/envoy/source/common/tcp/original_conn_pool.cc:255] [C1] connection destroyed
Could you see any key words in the logs?
@soulxu With both yaml files (with and without the "listener_filters" block), the Envoy TCP proxy always logs "queueing request due to no available connections" on the first socket event. But on the second socket event, only the config without "listener_filters" establishes the connection to the endpoint successfully.
I think that is normal; the connection pool has no available connections right after startup.
@soulxu do you have to configure any special settings for the server OS (e.g. sysctl.conf) or the server firewall (e.g. iptables, firewalld) on your Envoy server?
No, I don't. I also tested this config on v1.21.1 and can't reproduce your problem.
@soulxu I use Envoy version 1.18.2/clean-getenvoy-76c310e-envoy/RELEASE/BoringSSL from the Tetrate getenvoy-rpm-stable repository on CentOS 7.9. I also used Envoy v1.21.1 via func-e on CentOS 7.9 to try your config, but the issue still occurs. (I stop iptables/firewalld on the server when I try your config.)
@soulxu I tried Envoy and func-e on Ubuntu 20.04 (with iptables/firewalld stopped on the server) to run your config and get the same issue as on CentOS 7.9.
@soulxu I have tried Envoy 1.23.0-dev and see the logs below:
[2022-05-18 08:35:39.537][83165][debug][filter] [source/extensions/filters/listener/original_src/original_src.cc:23] Got a new connection in the original_src filter for address 172.24.206.84:50052. Marking with 0
[2022-05-18 08:35:39.537][83165][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:197] [C0] new tcp proxy session
[2022-05-18 08:35:39.537][83165][trace][connection] [source/common/network/connection_impl.cc:357] [C0] readDisable: disable=true disable_count=0 state=0 buffer_length=0
[2022-05-18 08:35:39.537][83165][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:369] [C0] Creating connection to cluster 528_tcp
[2022-05-18 08:35:39.538][83165][debug][misc] [source/common/upstream/cluster_manager_impl.cc:1937] Allocating TCP conn pool
[2022-05-18 08:35:39.538][83165][debug][pool] [source/common/conn_pool/conn_pool_base.cc:268] trying to create new connection
[2022-05-18 08:35:39.538][83165][trace][pool] [source/common/conn_pool/conn_pool_base.cc:269] ConnPoolImplBase 0x46227fc53a90, ready_clients.size(): 0, busy_clients.size(): 0, connecting_clients.size(): 0, connecting_stream_capacity: 0, num_active_streams: 0, pending_streams.size(): 1 per upstream preconnect ratio: 1
[2022-05-18 08:35:39.538][83165][debug][pool] [source/common/conn_pool/conn_pool_base.cc:145] creating a new connection (connecting=0)
[2022-05-18 08:35:39.538][83165][debug][connection] [source/common/network/connection_impl.cc:912] [C1] connecting to 172.24.206.56:528
[2022-05-18 08:35:39.539][83165][debug][connection] [source/common/network/connection_impl.cc:931] [C1] connection in progress
[2022-05-18 08:35:39.539][83165][trace][pool] [source/common/conn_pool/conn_pool_base.cc:131] not creating a new connection, shouldCreateNewConnection returned false.
[2022-05-18 08:35:39.539][83165][debug][conn_handler] [source/server/active_tcp_listener.cc:142] [C0] new connection from 172.24.206.84:50052
[2022-05-18 08:35:39.539][83165][trace][connection] [source/common/network/connection_impl.cc:563] [C0] socket event: 2
[2022-05-18 08:35:39.539][83165][trace][connection] [source/common/network/connection_impl.cc:674] [C0] write ready
[2022-05-18 08:35:40.036][83165][debug][pool] [source/common/conn_pool/conn_pool_base.cc:687] [C1] connect timeout
[2022-05-18 08:35:40.036][83165][debug][connection] [source/common/network/connection_impl.cc:139] [C1] closing data_to_write=0 type=1
[2022-05-18 08:35:40.036][83165][debug][connection] [source/common/network/connection_impl.cc:250] [C1] closing socket: 1
[2022-05-18 08:35:40.036][83165][trace][connection] [source/common/network/connection_impl.cc:418] [C1] raising connection event 1
[2022-05-18 08:35:40.036][83165][debug][pool] [source/common/conn_pool/conn_pool_base.cc:439] [C1] client disconnected, failure reason:
[2022-05-18 08:35:40.036][83165][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:543] [C0] connect timeout
[2022-05-18 08:35:40.036][83165][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:369] [C0] Creating connection to cluster 528_tcp
[2022-05-18 08:35:40.036][83165][debug][connection] [source/common/network/connection_impl.cc:139] [C0] closing data_to_write=0 type=1
[2022-05-18 08:35:40.036][83165][debug][connection] [source/common/network/connection_impl.cc:250] [C0] closing socket: 1
[2022-05-18 08:35:40.036][83165][trace][connection] [source/common/network/connection_impl.cc:418] [C0] raising connection event 1
[2022-05-18 08:35:40.036][83165][trace][filter] [source/common/tcp_proxy/tcp_proxy.cc:589] [C0] on downstream event 1, has upstream = true
[2022-05-18 08:35:40.036][83165][trace][conn_handler] [source/server/active_stream_listener_base.cc:111] [C0] connection on event 1
[2022-05-18 08:35:40.036][83165][debug][conn_handler] [source/server/active_stream_listener_base.cc:120] [C0] adding to cleanup list
[2022-05-18 08:35:40.036][83165][trace][main] [source/common/event/dispatcher_impl.cc:249] item added to deferred deletion list (size=1)
[2022-05-18 08:35:40.037][83165][trace][main] [source/common/event/dispatcher_impl.cc:249] item added to deferred deletion list (size=2)
[2022-05-18 08:35:40.037][83165][trace][pool] [source/common/conn_pool/conn_pool_base.cc:131] not creating a new connection, shouldCreateNewConnection returned false.
[2022-05-18 08:35:40.037][83165][trace][main] [source/common/event/dispatcher_impl.cc:249] item added to deferred deletion list (size=3)
[2022-05-18 08:35:40.037][83165][debug][pool] [source/common/conn_pool/conn_pool_base.cc:410] invoking idle callbacks - is_draining_for_deletion=false
[2022-05-18 08:35:40.037][83165][trace][upstream] [source/common/upstream/cluster_manager_impl.cc:1818] Idle pool, erasing pool for host 0x46227f80c560
[2022-05-18 08:35:40.037][83165][trace][main] [source/common/event/dispatcher_impl.cc:249] item added to deferred deletion list (size=4)
[2022-05-18 08:35:40.037][83165][trace][main] [source/common/event/dispatcher_impl.cc:125] clearing deferred deletion list (size=4)
[2022-05-18 08:35:40.616][83155][debug][main] [source/server/server.cc:251] flushing stats
[2022-05-18 08:35:43.751][83163][debug][filter] [source/extensions/filters/udp/udp_proxy/udp_proxy_filter.cc:311] session idle timeout: downstream=172.24.206.84:50272 local=172.24.206.86:528
[2022-05-18 08:35:43.751][83163][debug][filter] [source/extensions/filters/udp/udp_proxy/udp_proxy_filter.cc:277] deleting the session: downstream=172.24.206.84:50272 local=172.24.206.86:528 upstream=172.24.206.56:528
Could you see any new clues from them?
When you use original_src, the expectation is that Envoy uses IP_TRANSPARENT to pretend the packet comes from the downstream remote IP.
It means the upstream endpoint 172.24.206.56 will reply to 172.24.206.84 (the client), not to 172.24.206.86 (the Envoy IP).
1. Have you set the routing table/iptables on node 172.24.206.56 so replies go back via the NIC of 172.24.206.86? (This is to handle the SYN/ACK 172.24.206.56 -> 172.24.206.86.)
2. Have you set the routing table/iptables on node 172.24.206.86 to let Envoy/the OS handle that SYN/ACK?
If either of the two is missing, Envoy won't establish a TCP connection to 172.24.206.56.
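To make questions 1 and 2 concrete, here is a rough sketch (not a drop-in solution) of the kind of routing/iptables setup usually needed for this. The addresses and interface names are taken from this thread; the fwmark value (1) and routing table number (100) are arbitrary assumptions:

```shell
# --- On the upstream node 172.24.206.56 (assuming ens224 is its NIC) ---
# Route replies destined for the client back through the Envoy host, so
# the SYN/ACK goes 172.24.206.56 -> 172.24.206.86 instead of straight
# to 172.24.206.84.
ip route add 172.24.206.84/32 via 172.24.206.86 dev ens224

# --- On the Envoy node 172.24.206.86 ---
# Deliver those returning packets (whose dst is the client IP, not ours)
# to the local stack so Envoy's transparent socket sees the SYN/ACK.
iptables -t mangle -A PREROUTING -p tcp --sport 528 -j MARK --set-mark 1
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```

These commands mirror the pattern from the mitmproxy full-transparent-mode howto linked later in this thread; verify them against your own topology before use.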
@lambdai thanks for your help. I checked the route tables on 172.24.206.86 (Envoy proxy) and 172.24.206.56 (endpoint) and they seem OK, as below:
root@86# ip route show
default via 172.24.206.1 dev ens160 proto static
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.24.206.0/24 dev ens160 proto kernel scope link src 172.24.206.86
172.24.206.56 via 172.24.206.86 dev ens160
172.24.206.70 via 172.24.206.86 dev ens160
root@86# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.24.206.1    0.0.0.0         UG    0      0        0 ens160
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.24.206.0    0.0.0.0         255.255.255.0   U     0      0        0 ens160
172.24.206.56   172.24.206.86   255.255.255.255 UGH   0      0        0 ens160
172.24.206.70   172.24.206.86   255.255.255.255 UGH   0      0        0 ens160
root@86# telnet 172.24.206.56 528
Trying 172.24.206.56...
Connected to 172.24.206.56.
Escape character is '^]'.
[root@56]# ip route show
default via 172.24.206.1 dev ens224 proto static metric 102
172.24.206.0/24 dev ens224 proto kernel scope link src 172.24.206.56 metric 102
[root@56]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.24.206.1    0.0.0.0         UG    102    0        0 ens224
172.24.206.0    0.0.0.0         255.255.255.0   U     102    0        0 ens224
@lambdai do you see any incorrect configurations from them?
@thangnn4688
I could be wrong, but IMHO neither route table looks complete. Normally the specialized routing rules are stored in tables other than table 0, so you may want to dump all of them with `ip route show table all`.
The point is, it's you who sets up the routing table. The Envoy original_src listener filter should be configured to align with your routing.
Check out this example: https://docs.mitmproxy.org/stable/howto-transparent/#full-transparent-mode-on-linux
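As a concrete illustration of aligning the filter with the routing, a minimal original_src listener-filter entry might look like this. This is a sketch, not the poster's actual config; the mark value 123 is an arbitrary example and must match whatever fwmark your ip rule/iptables setup uses:

```yaml
listener_filters:
- name: envoy.filters.listener.original_src
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.listener.original_src.v3.OriginalSrc
    # SO_MARK applied to upstream sockets; pair this with policy routing
    # such as `ip rule add fwmark 123 lookup 100` on the Envoy host.
    mark: 123
```

Note that the log line "Marking with 0" in the 1.23.0-dev output above indicates no mark was configured.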
@lambdai I followed your URL and configured iptables per its guide, but still can not make "envoy.filters.listener.original_src" work properly. Could you share a simple example of an iptables configuration that makes "envoy.filters.listener.original_src" work properly?
@lambdai could you help me create a basic iptables configuration to support the extension "envoy.filters.listener.original_src"? I have tried many times but it is still not OK.
@lambdai in my lab, the downstream server and upstream server are in the same subnet (VLAN). I read some documents about TPROXY with the Direct Server Return method; they said that the downstream server and upstream server should not be in the same subnet. I used tcpdump to capture the packet flow on the downstream server, Envoy server and upstream server while the downstream sent a TCP packet to the upstream, and I see the SYN session go directly between downstream and upstream (not through the Envoy proxy), but it does not succeed. Do you think I am missing any DSR configurations on the three nodes?
Hello,
I need to build an Envoy TCP proxy as a load balancer to forward TCP packets (logs) from some systems to a Splunk server. I configured the TCP proxy in envoy.yaml as below:
=====
static_resources:
  listeners:
  - name: listener_528tcp
    reuse_port: true
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 528
    listener_filters:
    filter_chains:
    per_connection_buffer_limit_bytes: 32768
I use envoy v1.21.1 to validate the configuration file and the result is OK, but when I start the envoy process and push TCP packets to port 528/TCP of the Envoy proxy, it does not forward them to the endpoints. I check the endpoints with "tcpdump -i ens224 tcp port 528 -vv" and don't see any TCP packets forwarded from the Envoy proxy. If I delete the "listener_filters" block, restart the Envoy proxy, and push TCP packets to port 528/TCP again, then tcpdump on the endpoints does show the TCP packets, but the log body contains the IP address of the Envoy proxy (not the remote/client IP address). I think my listener_filters block has some configuration issue, but I can not find the reason. Please help me to solve this case, thanks very much!
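Separately, since original_src relies on IP_TRANSPARENT, the Envoy process needs CAP_NET_ADMIN, as noted earlier in the thread. A sketch of granting and verifying it; the binary path /usr/local/bin/envoy is an assumption, so adjust it for getenvoy/func-e installs:

```shell
# Grant CAP_NET_ADMIN to the Envoy binary so it can set IP_TRANSPARENT
# on upstream sockets (path is an example; point it at your actual binary).
sudo setcap cap_net_admin+ep /usr/local/bin/envoy

# Verify the capability was applied; the output should include cap_net_admin.
getcap /usr/local/bin/envoy
```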