Open pgabrys94 opened 8 months ago
Why did you put the TeamSpeak server behind NPM? This network flow feels pointless to me. Just expose the TeamSpeak server directly.
Why bother answering if it doesn't bring anything to the discussion? NPM offers this functionality, so I used it. Why? Maybe as another security layer, maybe I just want to, maybe I'm trying things out. The point is: it's not working the way it is supposed to work.
This may be due to a couple of things. Unlike the proxy host option, the stream option does not look like it allows for custom configuration, so you're going to have to get down and dirty for this one.

[1] I found this article, which basically states everything I wanted to say in this regard. I heavily use nginx to reverse proxy streams of all types but have never documented any of the strangeness I've had to overcome. The author describes things in decent detail for UDP connections.

[2] This is the stream template, which does not have any of the nginx directives/configuration options mentioned in the link in [1].

My basic example illustrates my comments above: I created a dummy stream from incoming port 8888 to localhost port 8887 to show the config that's generated.
```nginx
# ------------------------------------------------------------
# 8888 TCP: 0 UDP: 1
# ------------------------------------------------------------
server {
  listen 8888 udp;
  listen [::]:8888 udp;

  proxy_pass 127.0.0.1:8887;

  # Custom
  include /data/nginx/custom/server_stream.conf;
  include /data/nginx/custom/server_stream_udp.conf;
}
```

Take note of the last `include` line. While custom configuration is not currently available through the NPM interface, it can be supplied via a custom configuration file that will not break NPM's functionality.

**Fixing things in the future**

1. There needs to be an enhancement for NPM to allow custom config for streams. I filed an enhancement here.
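For reference, here is a sketch of the kind of UDP-related directives such a custom file could carry, in the spirit of the article linked in [1]. The values below are illustrative assumptions, not tested recommendations; tune them for your own traffic:

```nginx
# Hypothetical contents of /data/nginx/custom/server_stream_udp.conf.
# These directives are pulled into the generated server { } block by the
# include line shown above, so they apply without editing NPM's template.

proxy_timeout 60s;    # how long an idle UDP "session" to the upstream is kept
proxy_responses 1;    # expect one reply datagram per client datagram before
                      # nginx considers the exchange complete
```

Since UDP has no real connections, nginx tracks sessions by timers and expected response counts; getting these two directives wrong is a common source of apparent packet loss through a stream proxy.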
@pgabrys94 Let me know if the above explanation is enough to point you in the right direction. If you read the article I linked in [1], it should elaborate on some of the timeouts and connection issues you're hitting here.
If that stuff does not directly fix the problem, it should give you enough insight into how to debug, and into some of the discrepancies involved. The other thing I might mention is to run tcpdump in the container itself as well as on the host of the container engine.
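A sketch of what that tcpdump comparison could look like. The interface name, the container name `npm`, and the packet count are all assumptions; adjust them for your environment, and run as root:

```shell
# On the container host (the PVE VM): count TeamSpeak datagrams arriving
# on the bridge. vmbr0 is a guess; check `ip link` for the real interface.
tcpdump -ni vmbr0 -c 50 udp port 9987

# Inside the NPM container: verify the same datagrams actually get there.
docker exec npm tcpdump -ni any -c 50 udp port 9987
```

Comparing the two captures tells you whether packets are being dropped before nginx ever sees them, or inside the proxy hop itself.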
Thanks for your response @bluekitedreamer. The article you pointed to allowed me to fix my issue. However, adding the custom configuration in a separate file broke the proxy completely (though there is a chance I did something wrong).
My solution was to append the `reuseport` flag directly to the `listen` directive in `/data/nginx/stream/*.conf`:

```nginx
server {
  listen 9987 udp reuseport;
  proxy_pass my.ts3.local:9987;
  include /data/nginx/custom/server_Stream.conf;
  include /data/nginx/custom/server_Stream_udp.conf;
}
```
I do know it is not persistent, though, since changing anything via the WebUI will replace the config content. I wasn't able to find a way to just append `reuseport` at the end of the listen directive, and copying the whole config content into server_stream_udp.conf (without the includes) with `reuseport` added kept crashing my proxy.
@pgabrys94 Awesome, glad to hear you got it working!
> Although i do know it is not persistent, since changing anything by WebUI will replace config content. I wasn't able to find how to just append "reuseport" at the end of a listen directive, and copying whole config content into server_stream_udp.conf (without includes) with added "reuseport" kept crashing my proxy.
Okay, it's important that you know that: any GUI changes will override this (I think). I haven't torn apart the innards of the stream templating code in this project, so I can't say for sure.
The main `nginx.conf` looks like this for stream config file reference:

```nginx
stream {
  # Files generated by NPM
  include /data/nginx/stream/*.conf;

  # Custom
  include /data/nginx/custom/stream.conf;
}
```
`include /data/nginx/custom/stream.conf;`

This is the important piece: instead of importing everything using `*`, the config is looking for a specific filename.
What did you name your custom stream file, and which directory did you place it in?

`/data/nginx/custom/stream.conf`
Its intention is for you to put server code blocks in that file. In this case you could copy the generated code block, plus your added `reuseport` line, into this file. Keep in mind this would only exist outside NPM. It essentially allows you to run custom server blocks, which cannot be done inside NPM, i.e. basically your use case.
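A sketch of what that `/data/nginx/custom/stream.conf` could contain, using the port and upstream from the config pasted earlier in this thread. This is an illustration, not a tested drop-in:

```nginx
# /data/nginx/custom/stream.conf
# A standalone server block managed outside NPM's GUI, so WebUI changes
# to other streams will not overwrite it.
server {
    listen 9987 udp reuseport;
    proxy_pass my.ts3.local:9987;
}
```

You would presumably also remove the corresponding stream from the NPM UI first, since nginx would otherwise see two server blocks listening on the same port and refuse to start.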
You could also place it in `/data/nginx/custom/server_Stream.conf`, as noted in the config you pasted.
I misspoke in my other bug report: NPM is not allowing you to extend the server block like I thought it was.
I missed this point in your comment:

> and copying whole config content into server_stream_udp.conf

Is the file you used named `server_stream_udp.conf` or `server_Stream_udp.conf`? The capital `S` in the second word, `Stream`, is important here.
Issue is now considered stale. If you want to keep it open, please comment :+1:
**Checklist**

- Using the `jc21/nginx-proxy-manager:latest` docker image?

**Describe the bug**
Really high packet loss rate (around 80%) when using the stream option to pass traffic on 9987:9987/udp. No issues when skipping the proxy (direct from router to the TS3 server).
**Nginx Proxy Manager Version**

2.11.1
**To Reproduce**

Steps to reproduce the behavior:
**Expected behavior**

No packet loss.
**Screenshots**
**Operating System**

Both the proxy and TeamSpeak are running on PVE (Proxmox VE).
**Additional context**

- using the PVE firewall
- ping seems OK, around 33 ms from client to server over the WAN connection