chobits / ngx_http_proxy_connect_module

A forward proxy module for CONNECT request handling
BSD 2-Clause "Simplified" License

Is it possible to relay the request to another proxy? #210

Open · cosmozhang1995 opened this issue 2 years ago

cosmozhang1995 commented 2 years ago

For example, we have a client C, an HTTPS site S, and two nginx proxy servers A and B:

  1. Client C sends an HTTPS request to proxy server A.
  2. A proxies the request through another proxy server B.
  3. B proxies the request to the real site server S.

This case may be similar to issue #118, but I think they are different.


I expect the network topology to look like this:

C ------------ CONNECT S -----------> A ------------ CONNECT S -----------> B ---- establish TCP connection ----> S
|                                     |                                     |                                     |
| ------- https client hello -------> | ------- https client hello -------> | ------- https client hello -------> |
|                                     |                                     |                                     |
| <------ https server hello -------- | <------ https server hello -------- | <------ https server hello -------- |
|                                     |                                     |                                     |
|               ......                |               ......                |               ......                |

So I configured the proxies as follows:

Proxy A (192.168.130.7 a.proxy.example.com):

upstream proxy_B {
    server 192.168.130.1:8080;
    keepalive 2000;
}

server {
    listen 8080 default_server;
    listen [::]:8080 default_server;

    server_name a.proxy.example.com;

    proxy_connect;
    proxy_connect_allow     all;
    proxy_connect_connect_timeout   10s;
    proxy_connect_read_timeout  10s;
    proxy_connect_send_timeout  10s;
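    # send the tunnel's TCP connection to B instead of to the host named in the CONNECT request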
    proxy_connect_address       192.168.130.1:8080;

    location / {
        proxy_pass http://proxy_B;
        proxy_set_header Host $host;
    }
}

Proxy B (192.168.130.1 b.proxy.example.com):

server {
    resolver 8.8.8.8;

    listen 8080 default_server;
    listen [::]:8080 default_server;

    server_name b.proxy.example.com;

    proxy_connect;
    proxy_connect_allow     443 8443;
    proxy_connect_connect_timeout   10s;
    proxy_connect_read_timeout  10s;
    proxy_connect_send_timeout  10s;

    location / {
        proxy_pass $scheme://$host$request_uri;
    }
}

Then I tested from client C like this:

curl "https://server.example.com/" -x "http://a.proxy.example.com:8080" -vvv

But the request failed:

*   Trying 192.168.130.7:8080...
* TCP_NODELAY set
* Connected to a.proxy.example.com (192.168.130.7) port 8080 (#0)
* allocate connect buffer!
* Establish HTTP proxy tunnel to server.example.com:443
> CONNECT server.example.com:443 HTTP/1.1
> Host: server.example.com:443
> User-Agent: curl/7.68.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection Established
< Proxy-agent: nginx
<
* Proxy replied 200 to CONNECT request
* CONNECT phase completed!
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CONNECT phase completed!
* CONNECT phase completed!
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number

However, calling the server directly through proxy B works fine:

curl "https://server.example.com/" -x "http://b.proxy.example.com:8080"

It seems that the actual network flow is like this:

C ------------ CONNECT S -----------> A ---- establish TCP connection ----> B ---- establish TCP connection ----> S
|                                     |                                     |                                     |
| ------- https client hello -------> | ------- https client hello -------> |                                     |
|                                     |  (expect plain HTTP request line)   |                                     |
|                                     |                                     |                                     |
| <---- plain HTTP 400 response ----- | <---- plain HTTP 400 response ----- |                                     |
|     (expect https server hello)     |                                     |                                     |

It seems that proxy A received the CONNECT request from C but only established a TCP connection to B, and the subsequent TLS client hello sent by C to A was simply forwarded to B. However, B expected a plain HTTP request, so it couldn't recognize the binary client hello and responded with a plain HTTP 400 error to A. A simply forwarded that response to C, which treated it as a TLS server hello and tried to parse TLS negotiation data from it. Obviously that ended in failure.

In my opinion, issue #118 is a little different from this case. In that case, the squid server (corresponding to B in our case) acts as a real server rather than a proxy server: the HTTP content is retrieved from S and decrypted on B, then re-encrypted and sent through A to C. So there is effectively only one proxy server, A.


On the other hand, commenting out the proxy_connect_address line is not the desired solution either. It seems to cause A to establish the TCP connection to S by itself and then forward the subsequent packets directly to S. If no DNS resolver is configured, A cannot resolve the address of S and thus cannot establish the connection, which leads to a 502 error.
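
For reference, here is a minimal sketch of that alternative (A tunnels to S by itself, with no chaining through B), assuming A has direct outbound access to S and can reach a DNS resolver (8.8.8.8 here, as on B). It avoids the 502 but does not give the A -> B topology I want:

server {
    listen 8080 default_server;
    listen [::]:8080 default_server;

    server_name a.proxy.example.com;

    resolver 8.8.8.8;    # A now has to resolve the CONNECT target itself

    proxy_connect;
    proxy_connect_allow     all;
    proxy_connect_connect_timeout   10s;
    proxy_connect_read_timeout  10s;
    proxy_connect_send_timeout  10s;
    # no proxy_connect_address: the tunnel goes straight from A to S
}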


The question is: is there any way for A to simply forward the CONNECT request to B, leave the connection-establishing job to the latter, and forward the subsequent packets to B so that B forwards them on to S?

cosmozhang1995 commented 2 years ago

Here is a solution: use the nginx stream module on A instead of the HTTP proxy module.

stream {
    server {
        listen 8080;
        proxy_pass b.proxy.example.com:8080;
    }
}
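
For completeness, here is roughly how this sits in nginx.conf on A (a sketch, assuming nginx is built with the stream module, i.e. --with-stream). The stream block is a top-level block, a sibling of http, so A just relays the raw TCP bytes: the CONNECT request is answered by B, which then opens the tunnel to S:

# nginx.conf on A (sketch)
events {}

http {
    # regular http servers, if any
}

stream {
    server {
        listen 8080;
        # blind TCP relay: C's CONNECT request and the TLS handshake reach B unchanged
        proxy_pass b.proxy.example.com:8080;
    }
}
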
sorbing commented 2 years ago

I am also trying to connect: [Client] -> [Nginx forward proxy + Cache] -> [Proxy Server]. Unfortunately, the stream server approach does not support caching. Is it possible to specify proxy_pass http://external_forward_proxy_upstream;? Thanks.

tpanum commented 5 months ago

Sad to find this thread over 2 years later and no updates :-(

chobits commented 4 months ago

> Sad to find this thread over 2 years later and no updates :-(

The Stream module is a good way to resolve this issue (see https://github.com/chobits/ngx_http_proxy_connect_module/issues/210#issuecomment-1035787935).

This module is designed only for the HTTP CONNECT tunnel method and is not a complete solution for many complex scenarios. From my experience discussing with users on GitHub, many try to find a complete solution for their specific scenario. While I provide solutions when possible, some requests are beyond this module's capability. For example, in https://github.com/chobits/ngx_http_proxy_connect_module/issues/210#issuecomment-1076917647, caching the content of requests goes against the module's design philosophy: it is intended to be a traffic proxy, not to unpack and manipulate traffic, as that would make the implementation complex and unstable. It would also require more work both technically and in terms of user experience (e.g., only specific types of content packets could be unpacked).

chobits commented 4 months ago

But I still welcome everyone to discuss. If certain features do not go against the original design, they can be incorporated into this module. From users' scenarios, I have discovered many interesting use cases that I hadn't initially considered, even though they sometimes exceed the capabilities of this module.