XTLS / Xray-core

Xray, Penetrates Everything. Also the best v2ray-core, with XTLS support. Fully compatible configuration.
https://t.me/projectXray
Mozilla Public License 2.0

SplitHTTP h3 h2 multiplex controller #3560

Open mmmray opened 1 month ago

mmmray commented 1 month ago

Originally this was reported as a panic under #3556, and the changes there had some effect on this. But the issue slowly drifted into an unrelated v2rayNG bug. That bug is fixed now, but the dialerProxy issue remains.

configs:

config-sh-h3.json config-sh-h3-server.json

./xray -c config-sh-h3-server.json
./xray -c config-sh-h3.json

command to reproduce:

$ curl -x socks5h://127.0.0.1:2080 ifconfig.me
curl: (52) Empty reply from server

error in the logs when using d8994b7:

transport/internet/splithttp: failed to send download http request > Get "https://127.0.0.1:6001/6e67de80-f752-4df0-a828-3bcc3d1aaaf6": transport/internet/splithttp: unsupported connection type: %T&{reader:0xc0004658f0 writer:0xc000002250 done:0xc0002c84e0 onClose:[0xc000002250 0xc000002278] local:0xc000465890 remote:0xc0004658c0}

when reverting d8994b7, the client crashes instead:

panic: interface conversion: net.Conn is *cnc.connection, not *internet.PacketConnWrapper

goroutine 67 [running]:
github.com/xtls/xray-core/transport/internet/splithttp.getHTTPClient.func2({0x15735c8, 0xc000311ae0}, {0x0?, 0xc00004f700?}, 0xc00031e4e0, 0xc0001e14d0)
        github.com/xtls/xray-core/transport/internet/splithttp/dialer.go:108 +0x145
github.com/quic-go/quic-go/http3.(*RoundTripper).dial(0xc0002f7ce0, {0x15735c8, 0xc000311ae0}, {0xc00034ea30, 0xe})
        github.com/quic-go/quic-go@v0.45.1/http3/roundtrip.go:312 +0x27a
github.com/quic-go/quic-go/http3.(*RoundTripper).getClient.func1()
        github.com/quic-go/quic-go@v0.45.1/http3/roundtrip.go:249 +0x77
created by github.com/quic-go/quic-go/http3.(*RoundTripper).getClient in goroutine 66
        github.com/quic-go/quic-go@v0.45.1/http3/roundtrip.go:246 +0x289
mmmray commented 1 month ago

The QUIC transport probably has the same issue: https://github.com/XTLS/Xray-core/blob/a0040f13dd42264bf0790ce4fe770fd350fae585/transport/internet/quic/dialer.go#L151-L161

RPRX commented 1 month ago

This is the `type connection struct` under common/net/cnc/connection.go, but it doesn't implement net.PacketConn yet; I'll write that.

RPRX commented 1 month ago

Even with ReadFrom and WriteTo implemented, the `local` and `remote` of `type connection struct` are both 0.0.0.0, and it ends up panicking here in quic-go:

func (m *connMultiplexer) AddConn(c indexableConn) {
    m.mutex.Lock()
    defer m.mutex.Unlock()

    connIndex := m.index(c.LocalAddr())
    p, ok := m.conns[connIndex]
    if ok {
        // Panics if we're already listening on this connection.
        // This is a safeguard because we're introducing a breaking API change, see
        // https://github.com/quic-go/quic-go/issues/3727 for details.
        // We'll remove this at a later time, when most users of the library have made the switch.
        panic("connection already exists") // TODO: write a nice message
    }
    m.conns[connIndex] = p
}

Maybe fill `local` with an arbitrary value to trick it? Also, I'm not sure whether the other end of cnc knows this is UDP rather than TCP; judging from the fact that WG works, it probably does.

RPRX commented 1 month ago

Actually this problem can be solved later, or perhaps doesn't need solving at all, since SplitHTTP H3 basically never needs to be combined with dialerProxy; I only hit it because my original outbound config wasn't set up properly.

RPRX commented 1 month ago

I noticed that SplitHTTP H3's latency is twice that of H2; it looks like there's no connection reuse? https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2240883881 also only panics once a second connection appears, so there is some commonality.

RPRX commented 1 month ago

I noticed that SplitHTTP H3's latency is twice that of H2; it looks like there's no connection reuse?

@dyhkwong about this issue: surely it shouldn't require calling OpenStream() manually, right?

RPRX commented 1 month ago

SplitHTTP H3 also has globalDialerMap, but strangely quic-go's http3 doesn't reuse connections automatically and Dials every time. Is something not configured correctly?

Fangliding commented 1 month ago

Maybe quic-go/http3 just doesn't support it; without implementing stream opening yourself, reusing the earlyConnection or the UDPConn both throw errors ( might as well go with mux

Mfire2001 commented 1 month ago

Maybe quic-go/http3 doesn't support it. I didn't implement stream myself. Reusing earlyConnection or UDPConn will result in an error ( or mux).

According to RFC 9114 Section 4.1, only one request can be sent on each stream:

A client sends an HTTP request on a request stream, which is a client-initiated bidirectional QUIC stream; see Section 6.1. A client MUST send only a single request on a given stream. A server sends zero or more interim HTTP responses on the same stream as the request, followed by a single final HTTP response

RPRX commented 1 month ago

Maybe quic-go/http3 just doesn't support it; without implementing stream opening yourself, reusing the earlyConnection or the UDPConn both throw errors (

https://github.com/XTLS/Xray-core/pull/3565#issuecomment-2241348793 With that many upload POSTs, they surely can't all be opening new connections; that would be brutal. It feels like reuse does happen, "but for some reason it stops reusing as soon as a new connection is proxied"; could it be because of the GET? @mmmray what do u think?

might as well go with mux

No. Mux over QUIC would have head-of-line blocking, and one of H3's big advantages would be gone.

RPRX commented 1 month ago

No. Mux over QUIC would have head-of-line blocking, and one of H3's big advantages would be gone.

I looked at the group chat; to avoid misunderstanding, what this refers to is Xray's Mux over a single QUIC stream.

mmmray commented 1 month ago

I have only seen this lack of connection reuse with HTTP/1.1. There, it is inherent to the protocol: a chunked transfer cannot be aborted by the client without tearing down the TCP connection. Upload was still reused correctly.

In h2 it works normally already. I still have to catch up with how QUIC is behaving here, but I think there is no inherent reason related to the protocol.

You can try to create a separate RoundTripper for upload and download, to see if GET interferes with the connection reuse of POST. This is how I debugged things in h1. If nobody does it I can take a look next week.

RPRX commented 1 month ago

I can take a look next week.

You startled me; I checked the date and realized today is Sunday.

Anyway, for now: "I noticed that SplitHTTP H3's latency is twice that of H2; it looks like there's no connection reuse?" https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2241001918

Fangliding commented 1 month ago

Maybe quic-go/http3 doesn't support it. I didn't implement stream myself. Reusing earlyConnection or UDPConn will result in an error ( or mux).

According to rfc 9114 section 4.1 only one request can be sent on each stream

A client sends an HTTP request on a request stream, which is a client-initiated bidirectional QUIC stream; see Section 6.1. A client MUST send only a single request on a given stream. A server sends zero or more interim HTTP responses on the same stream as the request, followed by a single final HTTP response

The machine translator misinterpreted my words. What I'm talking about is opening streams to reuse the QUIC connection, not reusing a QUIC stream.

RPRX commented 1 month ago

SplitHTTP H3 also has globalDialerMap, but strangely quic-go's http3 doesn't reuse connections automatically and Dials every time. Is something not configured correctly?

Debugged the code a bit and found it isn't a quic-go problem, which is somewhat funny. SplitHTTP's dialer.go has this:

    if isH3 {
        dest.Network = net.Network_UDP

so that, by the time it is finally stored with:

    globalDialerMap[dialerConf{dest, streamSettings}] = client

the lookup at the start of the next call can't find it:

    if client, found := globalDialerMap[dialerConf{dest, streamSettings}]; found {
        return client
    }

That said, reusing one QUIC connection for everything isn't necessarily better, so I'll commit it first and won't rush the next release; please test whether the throughput differs.
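
For reference, a simplified sketch of the caching logic with the ordering fixed; the function shape and the createHTTPClient/isH3 parameters are simplifications for illustration, not the exact dialer.go code. The point is that the H3 network normalization has to happen before dest is used as the map key, otherwise the entry stored under the UDP key is never found by the next lookup:

```go
// Simplified sketch, not the actual dialer.go: normalize dest before it
// becomes the cache key, so later lookups hit the stored client.
func getHTTPClient(dest net.Destination, streamSettings *internet.MemoryStreamConfig, isH3 bool) DialerClient {
	if isH3 {
		dest.Network = net.Network_UDP // must run before the key is built
	}

	globalDialerAccess.Lock()
	defer globalDialerAccess.Unlock()

	if globalDialerMap == nil {
		globalDialerMap = make(map[dialerConf]DialerClient)
	}
	if client, found := globalDialerMap[dialerConf{dest, streamSettings}]; found {
		return client
	}

	client := createHTTPClient(dest, streamSettings) // stands in for the real construction
	globalDialerMap[dialerConf{dest, streamSettings}] = client
	return client
}
```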

Fangliding commented 1 month ago

So the client was simply never found ( The reason it was written that way is that otherwise the dialer below wouldn't know it should return a udpConn (

RPRX commented 1 month ago

I noticed that SplitHTTP H3's latency is twice that of H2; it looks like there's no connection reuse?

But in my tests with https://github.com/XTLS/Xray-core/commit/22535d86439952a9764d65119bcc739929492717, H3's latency is still about 3/4 higher than H2's. The test URL is HTTPS, so that's 2-RTT. I'm going to start again from Wireshark and take a look.

RPRX commented 1 month ago

I noticed that SplitHTTP H3's latency is twice that of H2; it looks like there's no connection reuse?

But in my tests with 22535d8, H3's latency is still about 3/4 higher than H2's. The test URL is HTTPS, so that's 2-RTT. ~I'm going to start again from Wireshark and take a look~

It seems there is an extra round trip before the inner Client Hello is sent. Anyway, please all measure the latency and look at Wireshark; I'm off to sleep.

Fangliding commented 1 month ago

I noticed that SplitHTTP H3's latency is twice that of H2; it looks like there's no connection reuse?

But in my tests with 22535d8, H3's latency is still about 3/4 higher than H2's. The test URL is HTTPS, so that's 2-RTT. ~I'm going to start again from Wireshark and take a look~

It seems there is an extra round trip before the inner Client Hello is sent. Anyway, please all measure the latency and look at Wireshark; ~I'm off to sleep~

I analyzed the capture and found that the http3 requests appear to be serialized. splithttp needs two requests, a GET and a POST, to establish a connection, and the current behavior is GET first, then POST. With h2 the two requests are sent simultaneously, but with h3 the client only issues the POST after the server has returned 200 OK, which amounts to one extra RTT of added latency.

Fangliding commented 1 month ago

Wireshark screenshots below.

Here is h2: the GET and POST are sent at the same time. [image]

Here is h3: the POST request is only issued after the GET goes out and the server's 200 OK comes back. [image]

Fangliding commented 1 month ago

Very strange. I thought it was an h3 client problem, but I tried handling the requests with two clients, and even across two QUIC connections the POST in one still waits for the GET in the other to be sent first, as if they were telepathic (

mmmray commented 1 month ago

So is it maybe the server that enforces this synchronization?

Fangliding commented 1 month ago

So is it maybe the server that enforces this synchronization?

It is obvious that the timing of the request being sent is controlled by the local client.

Fangliding commented 1 month ago

Very strange. I thought it was an h3 client problem, but I tried handling the requests with two clients, and even across two QUIC connections the POST in one still waits for the GET in the other to be sent first, as if they were telepathic (

Even if one of the two directions is switched to h2, the behavior persists ((

RPRX commented 1 month ago

I analyzed the capture and found that the http3 requests appear to be serialized. splithttp needs two requests, a GET and a POST, to establish a connection, and the current behavior is GET first, then POST. With h2 the two requests are sent simultaneously, but with h3 the client only issues the POST after the server has returned 200 OK, which amounts to one extra RTT of added latency.

Debugged the code some more (details omitted) and found the problem is this part of the OpenDownload function in SplitHTTP's client.go:

        trace := &httptrace.ClientTrace{
            GotConn: func(connInfo httptrace.GotConnInfo) {
                remoteAddr = connInfo.Conn.RemoteAddr()
                localAddr = connInfo.Conn.LocalAddr()
                gotConn.Close()
            },
        }

With H2, except for the very first request, GotConn is called back immediately, so gotConn.Close() runs and both OpenDownload and dialer.go's Dial return right away.

With H3, GotConn is never called back, so OpenDownload only returns after c.download.Do(req), and remoteAddr and localAddr are never obtained.

quic-go does not support httptrace yet: https://github.com/quic-go/quic-go/issues/3342

Since remoteAddr and localAddr aren't obtained with H3 anyway, I changed it to call gotConn.Close() directly to avoid the blocking; as for getting the addresses, @mmmray can look into that.
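
A minimal, self-contained sketch of that workaround (the names here are illustrative, not the exact client.go symbols): since quic-go never fires httptrace callbacks, the H3 path has to release the "got connection" signal up front instead of waiting for GotConn:

```go
package main

import (
	"net/http/httptrace"
)

// newGotConnSignal returns a channel that is closed as soon as a connection
// is (believed to be) available, plus the trace to attach to the request.
func newGotConnSignal(isH3 bool) (<-chan struct{}, *httptrace.ClientTrace) {
	gotConn := make(chan struct{})
	if isH3 {
		// quic-go does not support httptrace yet (quic-go#3342), so GotConn
		// would never fire; close immediately to avoid blocking OpenDownload
		// until the whole GET response has returned.
		close(gotConn)
		return gotConn, nil
	}
	return gotConn, &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			// For H1/H2 this fires once a (possibly reused) connection is
			// picked, so the dialer can return before the response arrives.
			close(gotConn)
		},
	}
}

func main() {
	done, trace := newGotConnSignal(true)
	_ = trace // would be attached with httptrace.WithClientTrace(ctx, trace)
	<-done    // returns immediately on the H3 path
}
```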

RPRX commented 1 month ago

To ⑥: SplitHTTP can of course be used for reverse proxying; it's just that the throughput in that direction will be rather underwhelming.

RPRX commented 1 month ago

To ⑥: SplitHTTP of course supports acceptProxyProtocol; it's just that when you put it behind a CDN there is X-Forwarded-For, which takes higher priority.

For UDP, see: https://github.com/pires/go-proxyproto/issues/88

mmmray commented 1 month ago

Thanks for investigating. Is this the main reason for the slowness, or is there some other synchronization between POST requests as well? Does maxConcurrentUploads work correctly?

I'm trying to remember what I considered on the first splithttp PR, instead of using httptrace. I believe all options were terrible, and it just got more complicated when I tried to gradually eliminate RTT.

Inside the dialer, we already have access to the raw connection, so one could pass it upwards by setting it on the DialerClient type.

However, it will only be called once, and there is no guarantee that it actually corresponds to the IP address used by the HTTP request. I think it's better to log nothing than to log something that could be wrong.

By the way, in your fix one can use if c.isH3 instead of this casting of RoundTripper.

RPRX commented 1 month ago

By the way, in your fix one can use if c.isH3 instead of this casting of RoundTripper.

Preparing to force-push.

RPRX commented 1 month ago

Also, fixing SplitHTTP's dialerProxy isn't hard either: implement ReadFrom and WriteTo (Buffer.UDP) for `type connection struct`, then fill `local` with a random value to trick quic-go. See https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2240871481 https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2240883881 https://github.com/XTLS/Xray-core/issues/3556#issuecomment-2241031881 @mmmray
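
A hedged sketch of that approach (illustrative only; the real `type connection struct` lives in common/net/cnc and may end up different): wrap a stream-style net.Conn that already carries one datagram per Read/Write so that it satisfies net.PacketConn, and report a fabricated local address so quic-go's connMultiplexer has something unique to index:

```go
package main

import (
	"math/rand"
	"net"
)

// packetConnShim adapts a net.Conn that transports whole datagrams into a
// net.PacketConn. The local address is fabricated because the underlying
// dialerProxy link has no meaningful one, and quic-go indexes by LocalAddr().
type packetConnShim struct {
	net.Conn
	local  net.Addr
	remote net.Addr
}

var _ net.PacketConn = (*packetConnShim)(nil)

func newPacketConnShim(c net.Conn, remote net.Addr) *packetConnShim {
	return &packetConnShim{
		Conn:   c,
		remote: remote,
		local: &net.UDPAddr{ // random loopback address, just to "trick" the multiplexer
			IP:   net.IPv4(127, byte(rand.Intn(256)), byte(rand.Intn(256)), byte(1+rand.Intn(255))),
			Port: 1024 + rand.Intn(64000),
		},
	}
}

func (p *packetConnShim) ReadFrom(b []byte) (int, net.Addr, error) {
	n, err := p.Conn.Read(b) // one Read is assumed to yield one whole datagram
	return n, p.remote, err
}

func (p *packetConnShim) WriteTo(b []byte, _ net.Addr) (int, error) {
	return p.Conn.Write(b) // the proxied link is already bound to the remote
}

func (p *packetConnShim) LocalAddr() net.Addr { return p.local }

func main() {}
```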

RPRX commented 1 month ago

Released a new version for these past two days' fixes, but I'm not sure whether reusing a single QUIC connection for everything affects speed tests; please compare v1.8.21 and v1.8.20.

Opening a new H3 stream at least avoids H2's head-of-line blocking; it may only depend on whether the ISP applies a per-UDP-flow rate limit. Later we could add https://github.com/XTLS/Xray-core/pull/3412#issuecomment-2162056344

RPRX commented 1 month ago

Thanks for investigating. Is this the main reason for the slowness, or is there some other synchronization between POST requests as well? Does maxConcurrentUploads work correctly?

A problem I noticed while reading the code is that these 10 uploads may be sent out at the same time, and it may well be that one carries 1MB while the other nine carry almost nothing; not sure whether that makes sense to you.

So the new release notes changed the previous version's "connections not reused" to "upload still to be optimized"; fittingly, just yesterday someone complained the release notes didn't mention it.

I think uploading at a fixed interval instead, say every 100ms or 10ms, would be more controllable and more stable; @mmmray give it a try.

mmmray commented 1 month ago

There may be an issue where, if some application emits very small Write, they may immediately manifest as small uploads, and only once there is a backlog, large writes are emitted.

The docs imply Nagle's algorithm is implemented but it's not actually the case. Maybe this needs to be done: Wait 10ms until at least N bytes arrive, otherwise flush out whatever is in uploadPipeReader
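
A rough, self-contained sketch of that "wait 10ms for at least N bytes, otherwise flush the backlog" idea (the type and its fields are invented for illustration; this is not the splithttp upload code): small Writes are accumulated and only handed to the flush callback once enough bytes are queued or the window expires, so tiny application writes don't each become their own upload.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
	"time"
)

// batchingWriter collects small Writes and flushes them as one chunk, either
// when minSize bytes are queued or when the time window runs out.
type batchingWriter struct {
	mu      sync.Mutex
	buf     bytes.Buffer
	minSize int
	window  time.Duration
	timer   *time.Timer
	flushFn func([]byte) // e.g. send one POST body
}

func (w *batchingWriter) Write(p []byte) (int, error) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.buf.Write(p)
	if w.buf.Len() >= w.minSize {
		w.flushLocked()
	} else if w.timer == nil {
		w.timer = time.AfterFunc(w.window, func() {
			w.mu.Lock()
			defer w.mu.Unlock()
			w.flushLocked()
		})
	}
	return len(p), nil
}

func (w *batchingWriter) flushLocked() {
	if w.timer != nil {
		w.timer.Stop()
		w.timer = nil
	}
	if w.buf.Len() == 0 {
		return
	}
	out := make([]byte, w.buf.Len())
	copy(out, w.buf.Bytes())
	w.buf.Reset()
	w.flushFn(out)
}

func main() {
	w := &batchingWriter{minSize: 16 * 1024, window: 10 * time.Millisecond,
		flushFn: func(b []byte) { fmt.Println("upload of", len(b), "bytes") }}
	w.Write([]byte("tiny"))           // too small: starts the 10ms timer
	time.Sleep(20 * time.Millisecond) // timer fires and flushes the 4 bytes
	w.Write(make([]byte, 32*1024))    // large enough: flushed immediately
	time.Sleep(20 * time.Millisecond)
}
```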

RPRX commented 1 month ago

There may be an issue where, if some application emits very small Write, they may immediately manifest as small uploads, and only once there is a backlog, large writes are emitted.

The docs imply Nagle's algorithm is implemented but it's not actually the case. Maybe this needs to be done: Wait 10ms until at least N bytes arrive, otherwise flush out whatever is in uploadPipeReader

Actually, a few dozen milliseconds of waiting is perfectly affordable; I think it's at least better than the current situation where the next round of uploads may have to wait a whole RTT. Of course, the mechanism you described can exist too.

However, uploading at a fixed number of milliseconds creates a fingerprint, so the timing needs a little randomization; the average works out about the same, though reusing the QUIC connection also helps dilute the fingerprint.

I've seen group members measure that ISPs really do rate-limit individual UDP flows, so the next version should expose some control options in the URL parameters, including this upload-every-N-milliseconds setting.

Putting them in the URL parameters is so that GUIs can support them easily; otherwise, if GUIs don't add these control options because they haven't updated or are being lazy (the norm), the options would be pointless.
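
A tiny sketch of the "randomize the upload interval within a range" idea (values and names are illustrative): roll a fresh delay each round so the pacing is not a perfectly regular, fingerprintable tick, while the average stays roughly the same.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextUploadDelay picks a delay in [min, max) for the next upload round.
func nextUploadDelay(min, max time.Duration) time.Duration {
	if max <= min {
		return min
	}
	return min + time.Duration(rand.Int63n(int64(max-min)))
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(nextUploadDelay(10*time.Millisecond, 30*time.Millisecond))
	}
}
```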

RPRX commented 1 month ago

Let's provide six control options first; I hope @mmmray can write them.

The fifth one is equivalent to Xray Mux's concurrency.

Fangliding commented 1 month ago

Released a new version for these past two days' fixes, but I'm not sure whether reusing a single QUIC connection for everything affects speed tests; please compare v1.8.21 and v1.8.20.

Opening a new H3 stream at least avoids H2's head-of-line blocking; it may only depend on whether the ISP applies a per-UDP-flow rate limit. Later we could add #3412 (comment)

hy just uses one connection and pushes it to the limit (

RPRX commented 1 month ago

hy just uses one connection and pushes it to the limit (

Maybe the ISP's per-UDP-flow rate limit only stops the honest and not Brutal, but the overall cap sits at a higher level and will never let you break through it; otherwise everyone would just buy the cheapest plan.

0ldm0s commented 1 month ago

Filing a bug for the first time; it feels like the same issue. The version is 1.8.21, the OS is macOS on an M1, and the Go used for building is go1.22.5 darwin/arm64. It works normally and the upload/download rates can saturate the server, but this error appears from time to time. The server side uses haproxy with an ACL pointing directly at xray, with quic v4/v6 + 0-RTT done in haproxy. If any other logs are needed, I'll provide what I can.

goroutine 249173 [running]:
github.com/quic-go/quic-go/http3.(hijackableBody).requestDone(...)
        github.com/quic-go/quic-go@v0.45.1/http3/body.go:124
github.com/quic-go/quic-go/http3.(hijackableBody).Read(0x14000ac8090, {0x1400065a000?, 0x14000a9e060?, 0x14000d29bb0?})
        github.com/quic-go/quic-go@v0.45.1/http3/body.go:114 +0x58
github.com/xtls/xray-core/transport/internet/splithttp.(LazyReader).Read(0x14000a9e060?, {0x1400065a000, 0x2000, 0x2000})
        github.com/xtls/xray-core/transport/internet/splithttp/lazy_reader.go:43 +0x5c
github.com/xtls/xray-core/transport/internet/splithttp.(LazyReader).Read(0x14000d0c820?, {0x1400065a000, 0x2000, 0x2000})
        github.com/xtls/xray-core/transport/internet/splithttp/lazy_reader.go:43 +0x5c
github.com/xtls/xray-core/transport/internet/splithttp.(splitConn).Read(0x0?, {0x1400065a000?, 0x14000d29c78?, 0x1024d2f94?})
        github.com/xtls/xray-core/transport/internet/splithttp/connection.go:21 +0x2c
github.com/xtls/xray-core/common/buf.(Buffer).ReadFrom(...)
        github.com/xtls/xray-core/common/buf/buffer.go:280
github.com/xtls/xray-core/common/buf.ReadBuffer({0x128d657b0, 0x14000560c00})
        github.com/xtls/xray-core/common/buf/reader.go:30 +0x6c
github.com/xtls/xray-core/common/buf.(SingleReader).ReadMultiBuffer(0x1?)
        github.com/xtls/xray-core/common/buf/reader.go:158 +0x28
github.com/xtls/xray-core/common/buf.copyInternal({0x10321d0e0, 0x14000d31020}, {0x10321ad18, 0x140015b7090}, 0x14001429f38)
        github.com/xtls/xray-core/common/buf/copy.go:93 +0x48
github.com/xtls/xray-core/common/buf.Copy({0x10321d0e0, 0x14000d31020}, {0x10321ad18, 0x140015b7090}, {0x14000d29f00, 0x1, 0x14000e61618?})
        github.com/xtls/xray-core/common/buf/copy.go:116 +0x90
github.com/xtls/xray-core/proxy/vless/outbound.(Handler).Process.func4()
        github.com/xtls/xray-core/proxy/vless/outbound/outbound.go:280 +0x4b4
github.com/xtls/xray-core/proxy/vless/outbound.(*Handler).Process.OnSuccess.func7()
        github.com/xtls/xray-core/common/task/task.go:12 +0x30
github.com/xtls/xray-core/common/task.Run.func1(0x14001a96380?)
        github.com/xtls/xray-core/common/task/task.go:28 +0x34
created by github.com/xtls/xray-core/common/task.Run in goroutine 249167
        github.com/xtls/xray-core/common/task/task.go:27 +0xd0

mmmray commented 1 month ago

This thread (and many other recent threads about splithttp) is about too many things at once. I think your bug is not directly related to the dialerProxy issue; I suggest filing a new issue.

0ldm0s commented 1 month ago

After turning on debug, I captured what looks like a problem caused by IPv6. I'll try disabling IPv6 on the server.

2024/07/24 00:09:40 [Info] transport/internet/splithttp: failed to send download http request > Get "https://vps-domain/path/uuid": INTERNAL_ERROR (local): write udp [::]:50017->38.147.189.176:443: sendmsg: invalid argument
2024/07/24 00:09:40 [Info] app/proxyman/outbound: app/proxyman/outbound: failed to process outbound traffic > proxy/vless/outbound: connection ends > proxy/vless/outbound: failed to decode response header > proxy/vless/encoding: failed to read response version > transport/internet/splithttp: failed to read initial response > transport/internet/splithttp: downResponse failed
2024/07/24 00:09:40 [Info] app/proxyman/outbound: app/proxyman/outbound: failed to process outbound traffic > proxy/vless/outbound: connection ends > proxy/vless/outbound: failed to decode response header > proxy/vless/encoding: failed to read response version > transport/internet/splithttp: failed to read initial response > transport/internet/splithttp: downResponse failed
2024/07/24 00:09:40 [Info] app/proxyman/outbound: app/proxyman/outbound: failed to process outbound traffic > proxy/vless/outbound: connection ends > proxy/vless/outbound: failed to decode response header > proxy/vless/encoding: failed to read response version > transport/internet/splithttp: failed to read initial response > transport/internet/splithttp: downResponse failed
2024/07/24 00:09:40 [Info] transport/internet/splithttp: failed to send upload > Post "https://vps-domain/path/uuid/0": INTERNAL_ERROR (local): write udp [::]:50017->38.147.189.176:443: sendmsg: invalid argument
2024/07/24 00:09:40 [Info] transport/internet/splithttp: failed to send upload > Post "https://vps-domain/path/uuid/0": INTERNAL_ERROR (local): write udp [::]:50017->38.147.189.176:443: sendmsg: invalid argument

RPRX commented 1 month ago

* Multiplexing mode, choose one of three:
* after how many reuses a new connection is opened
* at most how many concurrent sub-connections
* at most how many connections in total

Thinking it over, the following design for SplitHTTP H2/H3 multiplex control is more suitable.

First, a base mode, one of two:

Second, two limit dimensions, which can be in effect at the same time:

Finally, all of the options above take a range, and Xray picks a concrete value at random each time, to dilute potential fingerprints.

This way the first option of the original version ("open a new connection after N reuses") can also be composed from the others, and it offers more possibilities. Suggestions are welcome; otherwise it will be implemented like this.

I'm also thinking something could be done with the "algorithm" that chooses which connection a new stream is opened on, but that isn't worked out yet and can be patched in later.
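
To make the "every option takes a range and a concrete value is rolled each time" part concrete, here is a purely illustrative Go sketch; the field names are invented for this example (the concrete mode and limit options are not spelled out above) and are not the shipped settings:

```go
package main

import (
	"fmt"
	"math/rand"
)

// RangeOption holds an inclusive [From, To] range; a concrete value is
// re-rolled on each use to dilute any fixed-parameter fingerprint.
type RangeOption struct {
	From, To int32
}

func (r RangeOption) Roll() int32 {
	if r.To <= r.From {
		return r.From
	}
	return r.From + rand.Int31n(r.To-r.From+1)
}

// muxLimits sketches two limit dimensions that could be layered on top of a
// base mode: streams carried per connection and total open connections
// (field names are hypothetical).
type muxLimits struct {
	MaxStreamsPerConn RangeOption
	MaxConnections    RangeOption
}

func main() {
	limits := muxLimits{
		MaxStreamsPerConn: RangeOption{From: 8, To: 16},
		MaxConnections:    RangeOption{From: 2, To: 4},
	}
	fmt.Println(limits.MaxStreamsPerConn.Roll(), limits.MaxConnections.Roll())
}
```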

Fangliding commented 1 month ago

The neighbor's max_connections/min_streams/max_streams set already feels like enough; even ray's own mux only has a single max-sub-connections option (((

PoneyClairDeLune commented 1 month ago

Dynamic concurrency scaling based on send rate.

RPRX commented 1 month ago

The neighbor's max_connections/min_streams/max_streams set already feels like enough; even ray's own mux only has a single max-sub-connections option (((

We are designing new multiplex controls for our own needs, and you tell me to copy the neighbor. What's your problem? Sorry, I gave it a thumbs-down; nothing personal.

Once this mechanism is polished it will be added to Xray Mux as well. Besides, Xray Mux fixed v2's legacy issues long ago, yet people in the group still bring them up; is this reputation never going to go away?

RPRX commented 1 month ago

Dynamic concurrency scaling based on send rate.

Thinking about it, this fits best as a third level, i.e. added as the "algorithm for choosing which connection to open a new stream on"; that way even more combinations become possible.

RPRX commented 1 month ago

Given that this issue is mainly discussing h3 multiplex, I'm changing the title, just like the previous issue https://github.com/XTLS/Xray-core/issues/3556 was opened for dialerProxy but ended up fixing NG.

The dialerProxy discussion can continue in PR https://github.com/XTLS/Xray-core/pull/3570.

RPRX commented 1 month ago

@mmmray do you have time to implement these? If not, I can write it.

mmmray commented 1 month ago

The first two bullet points make sense to me, however I think setting "min connections" and "max connections" feels more natural than choosing a "mode" (I think they are equivalent anyway?)

So it is min_connections/max_connections/max_streams, it feels more consistent with existing mux settings (also from other cores), and I think it covers the same usecases as the "mode".

Then there are two more options:

一个连接最多被累计复用多少次,默认 0 为不限制 (The maximum number of times a connection can be reused. The default value is 0, which means no limit.) 一个连接最大存活时间,默认 0 为不限制 (The maximum survival time of a connection, the default is 0 for no limit)

Because of point 2, I don't see an urgent need for it right now, of course if you have the motivation to do it, it's good, it's one step further ahead. I'm a bit too overwhelmed with other tasks right now.

I will focus my efforts on improving upload bandwidth, since it's the biggest complaint. I think it's unrelated and won't overlap with your efforts, but I'm not 100% sure. It's still not clear to me when I will actually find the time to properly focus though...

And yeah, uquic and the dialerProxy issues are also on my list, maybe dialerProxy is not important but I just don't want to keep the PR hanging around (also due to some private conversations where people tried to use it for whatever reason)

RPRX commented 1 month ago

@mmmray min_connections doesn't cover those use cases, nor does it exist in other cores. It's more like the "pre-connection" discussed back when switch was being designed, i.e. opening connections in advance, before any traffic is proxied, to eliminate user-perceived RTT for any protocol; but it needs to idle some dummy traffic through them or it creates a fingerprint, so it wasn't added for now. Later.

"number of times": I see your English dropped the word "累计" (cumulative); look at this one: sub-connection count (cumulative). "number of bytes sent/rcvd" is a good suggestion; those can be added as two separate limit dimensions.

I understand this is for eliminating long-running connections as a feature. Generally, if the need is censorship-resistance, I think that we should wait until it gets blocked.

From experience, overly long-lived connections sometimes don't actually get blocked, but they do become unstable, so this is quite a practical limit dimension; it should accept units such as m, h, etc.

I will focus my efforts on improving upload bandwidth, since it's the biggest complaint. I think it's unrelated and won't overlap with your efforts, but I'm not 100% sure. It's still not clear to me when I will actually find the time to properly focus though...

Actually I think optimizing upload is the easiest part; implementing the "upload every N milliseconds (as a range)" scheme I described takes almost no time, which is why I've stopped discussing it.

RPRX commented 1 month ago

To ⑥: UDP can definitely migrate, because XUDP has migration built in. For TCP it depends on whether you hit the same CDN node; if you switch networks, it's hard to say.

That said, each SplitHTTP connection corresponds to one UUID; adding sequence numbers, buffering, and ACKs to the downlink as well would actually make migration of the inner TCP traffic possible, at a cost.

XUDP's migration is free of cost because it's UDP anyway: what's dropped is dropped. During a disconnection both ends buffer a little, but not much, and that buffering comes from Xray's own mechanisms rather than from XUDP.
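
Purely as an illustration of why downlink migration has a cost (this is not an existing Xray wire format, just a hypothetical frame shape): each end would have to number what it sends, buffer it until acknowledged, and replay the unacknowledged tail after reconnecting.

```go
// Hypothetical downlink frame header: Seq lets the receiver reorder and
// de-duplicate after a reconnect, Ack lets the sender drop buffered data.
type downlinkFrame struct {
	Seq     uint64 // offset of Payload within the inner TCP stream
	Ack     uint64 // highest contiguous uplink offset received so far
	Payload []byte
}
```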