openspeedtest / Nginx-Configuration

Nginx-Configuration for OpenSpeedTest Server

Weird upload speeds #9

Closed CrazyWolf13 closed 5 months ago

CrazyWolf13 commented 5 months ago

Hi @openspeedtest, I spent a long time looking through your repo, the open issues, and the example Nginx guide. I'm running OpenSpeedTest behind Nginx Proxy Manager with SSL, and the only option I enabled in NPM is "Cache Assets".

With this setup the download test works, but it comes in around 40% lower than when I test against the IP directly. Upload is essentially broken: it reads about 99% too low, and the network tab shows timeouts on all upload requests.

Could you point me in the right direction?

Directly pasting in the example config of course doesn't work, since I run Nginx Proxy Manager.

I used this as a reference: https://github.com/openspeedtest/Speed-Test/issues/4#issuecomment-1235696758

Here is the advanced options section of my proxy host:

        client_max_body_size 100000m;

Thanks!

CrazyWolf13 commented 5 months ago

After adding error_page 405 =200 $uri;

as suggested here in #5, it's not changing anything at all: upload speeds are still around 5 Mbps instead of 1000, download is around 600 instead of 1000, and the network tab still shows errors on upload: [screenshot]

openspeedtest commented 5 months ago

@CrazyWolf13 What's the error code? Click the red HTTP requests. (If it's 413, change the NPM settings to fix the issue.) Make sure you use HTTP/1.1. If you need HTTP/2 or HTTP/3, you need to check the Nginx config and implement the fix mentioned there.
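
If it does turn out to be a 413, the usual NPM-side fix is just a body-size override in the proxy host's advanced/custom configuration field; a minimal sketch (the value is illustrative, anything comfortably larger than the upload POSTs works):

        # Allow the large upload POSTs through the proxy (413 = Request Entity Too Large).
        client_max_body_size 10000m;   # or 0 to disable the size check entirely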

CrazyWolf13 commented 5 months ago

@openspeedtest There seems to be no HTTP status code at all. On the download requests I see a green 200 OK; on the red ones I only see this: [screenshot]

Adding proxy_http_version 1.1; makes the webpage go offline. Can you explain a bit more which "mentioned" fix you mean?

openspeedtest commented 5 months ago

@CrazyWolf13 Check response tab.

CrazyWolf13 commented 5 months ago

@openspeedtest [screenshot]

openspeedtest commented 5 months ago

[screenshots: 2024-04-20 at 8:50:50 PM and 8:50:38 PM]

CrazyWolf13 commented 5 months ago

Not sure what you want to tell me with those screenshots. I see it's working for you, but as you can see, for me it does not.

CrazyWolf13 commented 5 months ago

@openspeedtest When pasting nearly the full config from the nginx example:

        client_max_body_size 35m;
        error_page 405 =200 $uri;
        access_log off;
        gzip off; 
        fastcgi_read_timeout 999;
        log_not_found off;
        server_tokens off;
        error_log /dev/null; #Disable this for Windows Nginx.
        tcp_nodelay on;
        tcp_nopush on;
        sendfile on;
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors off;

        location ~ /\.well-known/acme-challenge/ {
            allow all;
            default_type "text/plain";
            root /usr/share/nginx/html/;
            try_files $uri =404;
            break;
        }

        location / {            
            add_header 'Access-Control-Allow-Origin' "*" always;
            add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Mx-ReqToken,X-Requested-With' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
            #Very Very Important! You SHOULD send no-store from server for Google Chrome.
            add_header Cache-Control 'no-store, no-cache, max-age=0, no-transform';
            add_header Last-Modified $date_gmt;
            if_modified_since off;
            expires off;
            etag off;

            if ($request_method = OPTIONS ) {
                add_header 'Access-Control-Allow-Credentials' "true";
                add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Mx-ReqToken,X-Requested-With' always;
                add_header 'Access-Control-Allow-Origin' "$http_origin" always;        
                add_header 'Access-Control-Allow-Methods' "GET, POST, OPTIONS" always;
                return 200;
            }
        }

       #HTTP2 & HTTP3 will not wait for the post body and return 200. We need to stop that behaviour.
       #Otherwise, you will see abnormal upload speed. To fix this issue, Enable the following lines. (Only Applicable If you Enabled HTTP2 or HTTP3 in This Server.)

       #HTTP2 & HTTP3 -> UPLOAD FIX -- START

       #location = /upload {
       #    proxy_pass http://127.0.0.1:3000/dev-null;
       #}
       #location = /dev-null {
       #    return 200;
       #}

       #HTTP2 & HTTP3 -> UPLOAD FIX -- END

      # Caching for Static Files,
      location ~* ^.+\.(?:css|cur|js|jpe?g|gif|htc|ico|png|html|xml|otf|ttf|eot|woff|woff2|svg)$ {
          access_log off;
          expires 365d;
          add_header Cache-Control public;
          add_header Vary Accept-Encoding;
          tcp_nodelay off;
          open_file_cache max=3000 inactive=120s;
          open_file_cache_valid 45s;
          open_file_cache_min_uses 2;
          open_file_cache_errors off;
          gzip on; 
          gzip_disable "msie6";
          gzip_vary on;
          gzip_proxied any;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          gzip_http_version 1.1;
          gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript image/svg+xml;
      }

[screenshot]

I get the default OpenResty page somehow.

CrazyWolf13 commented 5 months ago

This code seemed to be the cause; removing it fixed the default OpenResty page appearing, but the upload speeds are still hilariously wrong.

How can it be so difficult to provide a working example?

        location / {            
            add_header 'Access-Control-Allow-Origin' "*" always;
            add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Mx-ReqToken,X-Requested-With' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
            #Very Very Important! You SHOULD send no-store from server for Google Chrome.
            add_header Cache-Control 'no-store, no-cache, max-age=0, no-transform';
            add_header Last-Modified $date_gmt;
            if_modified_since off;
            expires off;
            etag off;

            if ($request_method = OPTIONS ) {
                add_header 'Access-Control-Allow-Credentials' "true";
                add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Mx-ReqToken,X-Requested-With' always;
                add_header 'Access-Control-Allow-Origin' "$http_origin" always;        
                add_header 'Access-Control-Allow-Methods' "GET, POST, OPTIONS" always;
                return 200;
            }
        }

CrazyWolf13 commented 5 months ago

Also, on this one I see no logical way it would work: it passes everything on /upload to localhost port 3000 at /dev-null? The normal upload requests will of course also go to that URL, so obviously there is a network error.

   #location = /upload {
   #    proxy_pass http://127.0.0.1:3000/dev-null;
   #}
   #location = /dev-null {
   #    return 200;
   #}

openspeedtest commented 5 months ago

@CrazyWolf13 Remove the proxy from the equation and use a Docker image. Alternatively, you can use Nginx along with the provided configuration. I tested NPM in 2022, and it worked without any issues. I will test NPM tomorrow and post my configuration. The fix for HTTP/2 and HTTP/3 in my Nginx config will accept the POST request and send it to the HTTP/1.1 upstream, then return a 200 status code. This ensures that the server receives the full payload. Many people, including myself, have tested this configuration, and it works.
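
For reference, here is what that fix looks like once uncommented, with the flow annotated; the 127.0.0.1:3000 upstream is the one used in the example config, so point it at wherever your speed test server actually listens:

        # HTTP/2 & HTTP/3 upload fix: the extra proxy hop runs over plain HTTP/1.1,
        # so the front end has to read the entire POST body before it can forward
        # it and answer 200, instead of replying early.
        location = /upload {
            proxy_pass http://127.0.0.1:3000/dev-null;
        }
        location = /dev-null {
            return 200;   # discard the payload and acknowledge it
        }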

openspeedtest commented 5 months ago

@CrazyWolf13 Check the HTTP status code and debug the issue. If you use a Docker image, you will face zero issues.

openspeedtest commented 5 months ago

@CrazyWolf13 Make sure everything is working fine without a proxy. Then add the proxy and debug. Every proxy works a little differently. Depending on the proxy, the configuration changes for the proxy server. You don't need to change anything in the OpenSpeedTest Docker. All you need to do is proxy the traffic to the OpenSpeedTest Server HTTP port.
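
In other words, the proxy host that NPM generates boils down to something like the sketch below (hostname and upstream address are placeholders; NPM adds the SSL and certificate boilerplate on top):

        server {
            listen 80;
            server_name speedtest.example.com;          # placeholder hostname

            client_max_body_size 10000m;                # keep the upload POSTs from hitting 413

            location / {
                proxy_pass http://127.0.0.1:3000;       # placeholder upstream: the OpenSpeedTest HTTP port
                proxy_http_version 1.1;                 # keep the proxied hop on HTTP/1.1
                proxy_set_header Host $host;
            }
        }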

CrazyWolf13 commented 5 months ago

I tested NPM in 2022, and it worked without any issues. I will test NPM tomorrow and post my configuration. The fix for HTTP/2 and HTTP/3 in my Nginx config will accept the POST request and send it to the HTTP/1.1 upstream, then return a 200 status code. This ensures that the server receives the full payload.

Thank you very much!

Let me know once you have a working config for NPM. Thanks!

openspeedtest commented 5 months ago

@CrazyWolf13 Please post more details about your setup, including your Docker Compose file etc., because NPM worked without additional configuration last time. Also post the status code you received for the POST request through NPM, and your NPM configuration. I will try to reproduce your issue tomorrow. Thanks

CrazyWolf13 commented 5 months ago

Not using Docker Compose, but here is my docker run command:

        docker run -dit \
            --name openspeedtest \
            --restart unless-stopped \
            --net vlan20 \
            --ip 10.10.20.200 \
            -p 3000:3000 \
            openspeedtest/latest

On NPM I created the proxy host with the following options, but I basically tried every combination.

- Cache: on
- Websockets: on
- Common exploits: on
- SSL: my wildcard cert
- Force SSL: on
- HSTS: on
- HSTS subdomain: on

Custom config:

        client_max_body_size 35m;
        error_page 405 =200 $uri;
        access_log off;
        sendfile on;
        gzip off; 
        fastcgi_read_timeout 999;
        log_not_found off;
        server_tokens off;
        error_log /dev/null; #Disable this for Windows Nginx.
        tcp_nodelay on;
        tcp_nopush on;

        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors off;

      location ~* ^.+\.(?:css|cur|js|jpe?g|gif|htc|ico|png|html|xml|otf|ttf|eot|woff|woff2|svg)$ {
          access_log off;
          expires 365d;
          add_header Cache-Control public;
          add_header Vary Accept-Encoding;
          tcp_nodelay off;
          open_file_cache max=3000 inactive=120s;
          open_file_cache_valid 45s;
          open_file_cache_min_uses 2;
          open_file_cache_errors off;
          gzip on; 
          gzip_disable "msie6";
          gzip_vary on;
          gzip_proxied any;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          gzip_http_version 1.1;
          gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript image/svg+xml;
      }

Let me know if more info is needed, thanks!

openspeedtest commented 5 months ago

Working as Expected.

https://github.com/openspeedtest/Nginx-Configuration/assets/51720450/14de9ffc-dec6-42e3-a03b-eb8819c7a3c3

openspeedtest commented 5 months ago

With this setup the download test works, but it comes in around 40% lower than when I test against the IP directly. Upload is essentially broken: it reads about 99% too low, and the network tab shows timeouts on all upload requests.

When you set up a proxy, the server will need to do double the work. Only create a proxy when it's absolutely necessary. Increasing CPU cores can solve issues like this. My local server with a 3900XT can proxy 10 Gbps using NPM without any issues. However, my Raspberry Pi struggles to proxy 1 Gbps using NPM. That said, upload speed should not be consistently 99% lower; that could indicate a problem with your setup. @CrazyWolf13

openspeedtest commented 5 months ago

[screenshot: npm]

openspeedtest commented 5 months ago

That server was in another continent. I tested it on a server located next to my state. With and without NPM, I did not observe much difference. However, my line speed was around 250 Mbps for both upload and download. NPM locally can achieve 10 Gbps or more, with or without encryption, on capable hardware.

What is your specific use case? If all you need is Let's Encrypt encryption, you can check this guide: link. Remove the proxy server from the middle to get maximum performance out of your hardware.

CrazyWolf13 commented 5 months ago

@openspeedtest Thank you for looking into this!

I have removed all the special options and now I get a much more plausible download rate: via IP it's around 980 Mbps while via the proxy it's 890-940, which seems just about perfect.

However, upload still sits at 20-120 Mbps while via IP it's 950 Mbps, so there still seems to be an issue here.

I see the upload requests are cancelled on yours as well; is that really expected?

I see that when testing on the OpenSpeedTest public website, some upload segments are cancelled and some are not.

CrazyWolf13 commented 5 months ago

@openspeedtest Oh, let me correct myself about the 99%: I mean the test through the proxy runs at just 1% of the upload speed I achieve via IP, so around 20 Mbps compared to 1000 Mbps.

openspeedtest commented 5 months ago

@CrazyWolf13 For the upload test, the app will attempt to establish six parallel HTTP connections to the server, each posting 30 MB: 6 x 30 = 180 MB in 12 seconds (the default test duration). If your connection is faster and you can upload more than 180 MB in 12 seconds, the app will resend the finished requests. Regardless of speed, the app will always try to maintain six parallel connections to the server for both download and upload. Since your speed is too low, the app cancelled the pending requests.
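
At roughly 20 Mbps that is about 2.5 MB/s, so only around 30 MB can move in the 12-second window; most of the six 30 MB requests can never finish, which is why they show up as cancelled.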

If you use Docker without anything in between, you won't encounter such issues. Ensure you do this first, because encountering issues without adding anything in between implies potential problems with the firewall or your client hardware, such as the switch or various other factors. If a normal Docker image works fine, then you can proceed to check for protocol or other proxy configuration issues.

CrazyWolf13 commented 5 months ago

@openspeedtest Oh okay, yeah that makes sense now.

I'm running OpenSpeedTest on Docker, yeah.

The issues don't occur when accessing it without Nginx Proxy Manager.

So it must be a problem with the config, still.

Can you confirm that for you the upload speeds are the same when using the IP directly as when going through a reverse proxy like NPM?

openspeedtest commented 5 months ago

Can you confirm that for you the upload speeds are the same when using the IP directly as when going through a reverse proxy like NPM?

Yes, if your hardware is powerful. My 3900XT can proxy 10 Gbps with and without HTTPS encryption. The client was an M1 Mac mini with 10 Gbps.

Download and upload speeds will not be the same on a Raspberry Pi when using NPM. I will run a test and report the exact speed I am getting on my local network. Therefore, depending on the hardware, the download and upload speeds may vary. However, this is only applicable for very slow devices like Raspberry Pi or low-end shared servers.

CrazyWolf13 commented 5 months ago

Yeah, I totally get that, but a difference of over 80% (120 vs. 1000 Mbps) at default settings is just a bit too much for "may vary a bit".

Thanks for testing though.

openspeedtest commented 5 months ago

HTTP with NPM

[screenshot: 2024-04-21 at 3:56:24 PM]

HTTP without NPM

[screenshot: 2024-04-21 at 3:57:10 PM]

HTTPS with NPM

[screenshot: 2024-04-21 at 3:58:26 PM]

HTTPS without NPM

[screenshot: 2024-04-21 at 3:59:10 PM]

The Raspberry Pi should perform 5 to 10% better than this, but this Raspberry Pi 4B is running its OS via ESXi: Debian ARM with CasaOS on top, along with other services. There are two gigabit switches (one at each end), a media converter, and a router in between.

CrazyWolf13 commented 5 months ago

Thank you!

After testing the config you sent me again, this time the results are pretty much identical to using the IP directly, so I think it's finally working. Huge thanks!