v2ray / v2ray-core

A platform for building proxies to bypass network restrictions.
https://www.v2ray.com/
MIT License
45.32k stars · 8.94k forks

Severe stream interruptions: io: read/write on closed pipe and connection ends > context canceled reproduce frequently #748

Closed · dexcomman closed this issue 6 years ago

dexcomman commented 6 years ago

1) Which version of V2Ray are you using? (If server and client use different versions, please note both.) Both are 2.51.2.

2) What is your usage scenario? For example, watching YouTube videos in Chrome through a Socks/VMess proxy. Proxifier providing a global SOCKS5 proxy for games and Chrome browsing; Tumblr on Android.

3) What incorrect behavior do you see? (Describe the concrete symptom, e.g. access timeouts, TLS certificate errors.) Stream interruptions are frequent; it looks as if a deadlock somewhere causes connections to stall intermittently. None of the remedies from the other stream-drop issues helped (configuring a dnsDetour port on the client, configuring DNS on the server, changing streamSettings, tuning buffers, lowering the MTU and send rate, raising alterId, switching to chacha20 and cfb, setting timeouts, and so on). Chrome suddenly stalls and pages stop loading, online games disconnect, and Tumblr videos hang for ages while images load smoothly. The problem reproduces across several direct routes from multiple providers and IPs, so in theory QoS should not be a factor. After an interruption, restarting the client reconnects immediately and instantly reaches full speed again. Tumblr video stalls reproduce 100% of the time, yet YouTube is fine :(

4) What behavior do you expect? A stable connection without interruptions. SSH to the same VPS is rock solid, and other people using KCP don't seem to have it nearly as badly as I do.

5) Please attach your configuration (hide the server IP address before submitting the issue).

Server configuration:

{
  "log": {                                  
    "access": "/var/log/v2ray/access.log",  
    "error": "/var/log/v2ray/error.log",    
    "loglevel": "debug"                   
  },

  "inbound": {
    "port": 777,
    "protocol": "vmess",
    "listen": "0.0.0.0",
    "settings": {
      "clients": [
        {
          "id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
          "level": 1,
          "alterId": 100
        }
      ]
    },
    "streamSettings": {
      "network": "kcp"
    }
  },
  "outbound": {
    "protocol": "freedom",
    "settings": {}
  },
  "transport": {
    "kcpSettings": {
      "mtu": 1460,
      "tti": 10,
      "uplinkCapacity": 5,
      "downlinkCapacity": 100,
      "congestion": false,
      "readBufferSize": 5,
      "writeBufferSize": 5,
      "header": {
        "type": "none"
      }
    }
  }
}
Client configuration:

{
  "inbound": {
    "port": 1080,
    "listen": "127.0.0.1",
    "protocol": "socks",
    "settings": {
      "auth": "noauth",
      "udp": true,
      "ip": "127.0.0.1"
      }
  },
  "outbound": {
    "protocol": "vmess",
    "settings": {
      "vnext": [
        {
          "address": "0.0.0.0",
          "port": 777,
          "users": [
            {
              "id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
              "alterId": 100,
              "security": "aes-128-gcm"
            }
          ]
        }
      ]
    },
    "streamSettings": {
      "network": "kcp"
    },
    "mux": {
      "enabled": true
    }
  },
  "transport": {
    "kcpSettings": {
      "mtu": 1460,
      "tti": 10,
      "uplinkCapacity": 100,
      "downlinkCapacity": 100,
      "congestion": false,
      "readBufferSize": 5,
      "writeBufferSize": 5,
      "header": {
        "type": "none"
      }
    }
  }
}

6) Please attach the error log output when the problem occurs. On Linux the log is usually at /var/log/v2ray/error.log.

Server error log:
2017/11/29 22:34:33 [Info]App|Proxyman|Outbound: failed to process outbound traffic > Proxy|Freedom: connection ends > context canceled
2017/11/29 22:34:33 [Info]App|Proxyman|Outbound: failed to process outbound traffic > Proxy|Freedom: connection ends > context canceled
2017/11/29 22:34:33 [Info]App|Proxyman|Outbound: failed to process outbound traffic > Proxy|Freedom: connection ends > context canceled
2017/11/29 22:34:33 [Info]App|Proxyman|Outbound: failed to process outbound traffic > Proxy|Freedom: connection ends > context canceled

2017/11/29 22:36:49 [Info]Transport|Internet|TCP: dailing TCP to tcp:vtt.tumblr.com:443
2017/11/29 22:36:50 [Info]App|Proxyman|Outbound: failed to process outbound traffic > Proxy|Freedom: connection ends > Proxy|Freedom: failed to process response > io: read/write on closed pipe
2017/11/29 22:36:55 [Debug]Transport|Internet|mKCP: #17454 entering state 4 at 241284
2017/11/29 22:36:55 [Info]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: connection ends > io: read/write on closed pipe
Client error log:
[Warning]failed to handler mux client connection > Proxy|VMess|Outbound: connection ends > context canceled

7) Please attach the access log. On Linux the log is usually at /var/log/v2ray/access.log (redacted).

2017/11/29 22:32:47 0.123.0.29:44897 rejected  Proxy|VMess|Encoding: failed to read request header > Transport|Internet|mKCP: Read/Write timeout
2017/11/29 22:32:47 0.123.03:47327 accepted tcp:v1.mux.cool:9527 
2017/11/29 22:32:54 0.123.0.29:45860 accepted tcp:v1.mux.cool:9527 
DarienRaymond commented 6 years ago

The error messages aren't really related to each other.

  1. Is your VPS shared with other people?
  2. Check the systemd logs, e.g. journalctl -u v2ray, and see whether the service has restarted or panicked.
dexcomman commented 6 years ago

@DarienRaymond

  1. Only I use it: 1.5 GB of RAM running nothing but v2ray, CN2 routing in both directions, so hardware can basically be ruled out; a fresh Debian 8 install shouldn't be a problem either.
  2. systemd log:

     Nov 30 03:05:56 denladen.com systemd[1]: Starting V2Ray Service...
     Nov 30 03:05:56 denladen.com systemd[1]: Started V2Ray Service.
     Nov 30 03:05:57 denladen.com v2ray[472]: V2Ray v2.51.2 (One for all) 20171126
     Nov 30 03:05:57 denladen.com v2ray[472]: An unified platform for anti-censorshi
     Nov 30 03:05:59 denladen.com v2ray[472]: 2017/11/30 03:05:59 [Debug]App|Proxyma
     Nov 30 03:05:59 denladen.com v2ray[472]: 2017/11/30 03:05:59 [Warning]Core: V2R

The systemd log really has nothing useful. In more than a year of use it has apparently never panicked or auto-restarted; the only errors are "cannot read config file" messages from the times I broke the JSON myself while editing... I'm at a loss as to where the problem is. The Windows client and the Linux server only ever show closed pipe, context canceled, and connection ends > EOF, nothing else.

3. Proxifier was downloaded from the official site, too :(

4. I checked the daemon log: every restart was triggered by me or by a manual VPS reboot, and all the connection records are from my own IP.

This problem has existed for a long time and I really can't find the cause. I've tuned every parameter against every related issue (yes, all of them) with no result, so I'm here asking for help.

dexcomman commented 6 years ago

[Warning]failed to handler mux client connection > Proxy|VMess|Outbound: connection ends > EOF

[Warning]failed to handler mux client connection > Proxy|VMess|Outbound: connection ends > io: read/write on closed pipe

It seems the stream drops exactly when the Windows client prints these. Should I keep logging for longer to collect more information?

Greatsaltedfish commented 6 years ago

I'm hitting the same problem; I suspect it's an mKCP issue.

DarienRaymond commented 6 years ago

If it's not hard to reproduce, set loglevel to debug and take a look.

When does this usually happen? For example, half an hour after a restart? Is there an obvious time pattern?

dexcomman commented 6 years ago

@DarienRaymond

  1. The logs I posted are already at debug level; I left out the repeated lines. These three messages are basically where every drop happens, on both client and server.

  2. The drops follow no pattern and are unrelated to traffic volume or server memory usage; the link stays stable even when I push memory use past 400 MB.

  3. A drop can be hard to notice while browsing, sometimes just a brief stall followed by an automatic reconnect, but most of the time the connection simply hangs. It may stay stable for a long stretch, or need a client restart within minutes.

  4. Since last night I've been eliminating suspects: uninstalled Firefox, cleaned up the system, triple-checked that nothing else occupies local port 1080, and replaced Proxifier with SocksCap64. The drops persist; the client only shows failed to handler mux client connection > Proxy|VMess|Outbound: connection ends > context canceled

PS: Some issues suggest mKCP is to blame, but I doubt it, since 2.1 and earlier versions never showed this. Others suggested switching the encryption algorithm, which doesn't seem plausible either.

If it helps, I can upload all the logs to a file share for reference; I'm honestly not capable of analyzing them myself.
dexcomman commented 6 years ago

@Greatsaltedfish

Could you share more details? Are you also only seeing stream drops?

DarienRaymond commented 6 years ago

A temporary workaround is to turn mux off. For one thing, mux gains you little over mKCP; for another, it helps narrow down where the problem is.
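Concretely, for the v2.x-style client config in the original post, the change is just the mux switch inside the vmess outbound (other outbound fields omitted for brevity; this is a sketch, not a complete config):

```json
{
  "outbound": {
    "protocol": "vmess",
    "mux": {
      "enabled": false
    }
  }
}
```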

Greatsaltedfish commented 6 years ago

Here is my server log:

2017/12/01 20:53:37 [Warning]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: unable to set read deadline > Transport|Internet|mKCP: Connection closed.
2017/12/01 20:54:22 [Warning]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: unable to set read deadline > Transport|Internet|mKCP: Connection closed.
2017/12/01 21:50:54 [Warning]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: unable to set read deadline > Transport|Internet|mKCP: Connection closed.
2017/12/01 21:53:07 [Warning]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: unable to set read deadline > Transport|Internet|mKCP: Connection closed.

Linux client log:

2017/12/01 20:53:37 [Warning]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: unable to set read deadline > Transport|Internet|mKCP: Connection closed.
2017/12/01 20:54:22 [Warning]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: unable to set read deadline > Transport|Internet|mKCP: Connection closed.
2017/12/01 21:50:54 [Warning]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: unable to set read deadline > Transport|Internet|mKCP: Connection closed.
2017/12/01 21:53:07 [Warning]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: unable to set read deadline > Transport|Internet|mKCP: Connection closed.
Greatsaltedfish commented 6 years ago

Server configuration file:

{
    "port": 9443,
    "log": {
        "access": "/var/log/v2ray/access.log",
        "error": "/var/log/v2ray/error.log",
        "loglevel": "warning"
    },
    "inbound": {
        "protocol": "vmess",
        "settings": {
            "clients": [{
                "id": "xxxxxxxxxx",
                "alterId": 68,
                "level": 0
            }]
        },
        "streamSettings": {
            "network": "kcp",
            "kcpSettings": {
                "mtu": 1400,
                "tti": 20,
                "uplinkCapacity": 100,
                "downlinkCapacity": 200,
                "congestion": true,
                "readBufferSize": 4,
                "writeBufferSize": 4,
                "header": {
                    "type": "srtp"
                }
            },
            "security": "tls",
            "tlsSettings": {
                "certificates": [
                    {
                        "certificateFile": "/etc/v2ray/v2ray.crt",
                        "keyFile": "/etc/v2ray/v2ray.key"
                    }
                ]
            }
        }
    },
    "outbound": {
        "protocol": "freedom",
        "settings": {}
    },
    "inboundDetour": [{
        "protocol": "shadowsocks",
        "port": 7966,
        "settings": {
            "method": "chacha20-ietf",
            "password": "xxxxx",
            "udp": false
        }
    }],
    "outboundDetour": [{
        "protocol": "blackhole",
        "settings": {},
        "tag": "blocked"
    }],
    "routing": {
        "strategy": "rules",
        "settings": {
            "rules": [{
                "type": "field",
                "ip": [
                    "0.0.0.0/8",
                    "10.0.0.0/8",
                    "100.64.0.0/10",
                    "127.0.0.0/8",
                    "169.254.0.0/16",
                    "172.16.0.0/12",
                    "192.0.0.0/24",
                    "192.0.2.0/24",
                    "192.168.0.0/16",
                    "198.18.0.0/15",
                    "198.51.100.0/24",
                    "203.0.113.0/24",
                    "::1/128",
                    "fc00::/7",
                    "fe80::/10"
                ],
                "outboundTag": "blocked"
            }]
        }
    }
}

Client configuration file:

{
  "log": {
    "access": "/var/log/v2ray/access.log",
    "error": "/var/log/v2ray/error.log",
    "loglevel": "debug"
  },
  "inbound": {
    "port": 4080,
    "listen": "127.0.0.1",
    "protocol": "socks",
    "settings": {
      "auth": "noauth",
      "udp": true,
      "ip": "127.0.0.1"
    }
  },
  "outbound": {
    "protocol": "vmess",
    "settings": {
      "vnext": [
        {
          "address": "xxxxx",
          "port":9443,
          "users": [
            {
              "id": "xxxxxxxxx",
              "alterId": 68,
              "security": "auto"
            }
          ]
        }
      ]
    },
    "streamSettings": {
      "network": "kcp",
      "kcpSettings": {
        "mtu": 1400,
        "tti": 20,
        "uplinkCapacity": 10,
        "downlinkCapacity": 100,
        "congestion": true,
        "readBufferSize": 4,
        "writeBufferSize": 4,
        "header": {
          "type": "srtp"
        }
      },
      "security": "tls"
    },
    "mux": {
      "enabled": false,
      "concurrency": 8
    }
  },
  "inboundDetour": [
    {
      "protocol": "dokodemo-door",  // an extra dokodemo-door inbound
      "port": 3080,                 // local listening port
      "settings": {
        "address": "xxxxxxxx",      // IP of the remote host
        "port": 33080,              // port mapped on the remote host
        "network": "tcp,udp",       // "tcp", "udp", or "tcp,udp"
        "timeout": 0                // transport timeout in seconds; 0 disables the check
      }
    }

],
  "outboundDetour": [
    {
      "protocol": "freedom",
      "settings": {},
      "tag": "direct"
    },{
      "protocol": "blackhole",
      "tag": "block_request",
      "settings": {
        "response": {
          "type": "http"  // automatically reply with HTTP 403
        }
      }
    }
  ],
  "dns": {
    "servers": [
     "123.207.64.65",
     "8.8.8.8",
      "8.8.4.4",
       "localhost"
    ]
  },
  "routing": {
    "strategy": "rules",
    "settings": {
      "domainStrategy": "IPIfNonMatch",
      "rules": [
       {
         "type":"field",
         "domain":["e.baidu.com","spcode.baidu.com","tk.baidu.com","union.baidu.com","ucstat.baidu.com","msg.71.am","17un.co","union.baidu.com","cb.baidu.com","a.baidu.com","baidutv.baidu.com","bar.baidu.com","c.baidu.com","nsclick.baidu.com"],
           "outboundTag": "block_request"
       },
        {
          "type": "field",
          "port": "1-52",
          "outboundTag": "direct"
        },
        {
          "type": "field",
          "port": "54-79",
          "outboundTag": "direct"
        },
        {
          "type": "field",
          "port": "81-442",
          "outboundTag": "direct"
        },
        {
          "type": "field",
          "port": "444-65535",
          "outboundTag": "direct"
        },
        {
          "type": "chinasites",
          "outboundTag": "direct"
        },
        {
          "type": "field",
          "ip": [
            "0.0.0.0/8",
            "10.0.0.0/8",
            "100.64.0.0/10",
            "127.0.0.0/8",
            "169.254.0.0/16",
            "172.16.0.0/12",
            "192.0.0.0/24",
            "192.0.2.0/24",
            "192.168.0.0/16",
            "198.18.0.0/15",
            "198.51.100.0/24",
            "203.0.113.0/24",
            "::1/128",
            "fc00::/7",
            "fe80::/10"
          ],
          "outboundTag": "direct"
        },{
          "type": "chinaip",
          "outboundTag": "direct"
        }

      ]
    }
  }
}

In my case the client opens pages very slowly, with severe timeouts. Throughput is high for an instant when a page starts loading, then drops off immediately.

dexcomman commented 6 years ago

@DarienRaymond

In theory mux shouldn't do anything for KCP, but enabling it clearly improved concurrency for me; I tried many times... it works lol

Still, I'll turn it off and test again, and report back.

dexcomman commented 6 years ago

@Greatsaltedfish

  1. I don't know your network, but for me the bare protocol with no obfuscation is fastest; you could give that a try.
  2. I also deleted the whole routing section to keep it from causing trouble; you might try that as well.
  3. Your logs show the mKCP connection being cut... that seems fairly conclusive.
Greatsaltedfish commented 6 years ago

@dexcomman Here is my server's debug log. Could you explain what the KCP states mean? Thanks.

2017/12/02 11:35:29 [Info]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: connection ends > context canceled
2017/12/02 11:35:31 [Info]App|Proxyman|Outbound: failed to process outbound traffic > Proxy|Freedom: connection ends > context canceled
2017/12/02 11:35:31 [Info]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: connection ends > context canceled
2017/12/02 11:35:37 [Debug]Transport|Internet|mKCP: #37986 entering state 2 at 41119
2017/12/02 11:35:40 [Debug]Transport|Internet|mKCP: #37986 entering state 4 at 45001
2017/12/02 11:35:45 [Debug]Transport|Internet|mKCP: #37986 entering state 3 at 50003
2017/12/02 11:35:50 [Debug]Transport|Internet|mKCP: #37986 entering state 5 at 55002
2017/12/02 11:35:50 [Info]Transport|Internet|mKCP: terminating connection to 121.32.34.72:7837
2017/12/02 11:36:07 [Info]App|Proxyman|Inbound: connection ends > Proxy|VMess|Inbound: connection ends > context canceled
2017/12/02 11:36:07 [Info]App|Proxyman|Outbound: failed to process outbound traffic > Proxy|Freedom: connection ends > context canceled
dexcomman commented 6 years ago

@Greatsaltedfish

I'm only half-competent at this... not sure my reading is right. At first glance your mKCP disconnects looked like QoS, because I have never seen a passive close myself.

That said, my own logs show the same messages you just posted. I don't understand state 5, and I don't know whether the active terminate is a buffer overflow; I'll try enlarging the buffers again, though it probably won't help.

@DarienRaymond

@DarienRaymond Does state 5 mean a panic? After state 5 the next log line is always an active disconnect.

Dakai commented 6 years ago

Judging from the logs this looks a lot like the problem I've hit recently: also stream drops. It turned out my broadband ISP was cutting mKCP traffic; even disguising it as BT traffic didn't help. Today I switched the KCP header to srtp and it has run for several hours without a single drop. I'm on China Unicom.

dexcomman commented 6 years ago

@Dakai

Before the switch, could you quickly reach full bandwidth again after a drop? Any other side effects?

Dakai commented 6 years ago

@dexcomman After recovering from a drop it could reach full bandwidth again, but the drops themselves were painful. So far none of the KCP disguises hold up; almost all of them get cut. TCP is more stable, but slow.

DarienRaymond commented 6 years ago

State 5 is Terminated: the connection is fully closed. Tearing the connection down and reclaiming resources immediately after this state is normal. The 2 → 4 → 3 → 5 state sequence above looks fine; it is a perfectly ordinary shutdown order, roughly equivalent to the four states of a TCP teardown.
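For reference, a rough sketch of the mKCP connection states as they appear in v2ray-core's transport/internet/kcp package in the v2.x era. The names and values here are reconstructed from memory of that source, so treat the exact list as an assumption rather than the authoritative definition:

```go
package main

import "fmt"

// State mirrors the mKCP connection state machine in v2ray-core's
// transport/internet/kcp package (v2.x era). Names/values are an
// assumption reconstructed from that source, not authoritative.
type State int

const (
	StateActive          State = 0 // both directions open, traffic flowing
	StateReadyToClose    State = 1 // local side has called Close()
	StatePeerClosed      State = 2 // peer has signalled it is closing
	StateTerminating     State = 3 // local side is tearing the session down
	StatePeerTerminating State = 4 // peer is tearing the session down
	StateTerminated      State = 5 // fully closed; resources are reclaimed
)

func main() {
	// The debug log's "entering state 4 ... entering state 5" therefore
	// reads as: peer terminating, then fully terminated.
	fmt.Println(StatePeerTerminating, StateTerminated)
}
```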

dexcomman commented 6 years ago

I've rolled back to 2.20.1 and it's rock solid... although this version doesn't seem to support UDP proxying. Or is it that UDP game proxying just doesn't work over mKCP? Everything else is perfect now.

DarienRaymond commented 6 years ago

KCP has barely changed from 2.20 to now. I made a small fix yesterday; hopefully it improves things.

dexcomman commented 6 years ago

Thanks for your work. I don't know whether my case is an outlier, but in my tests the earlier version's KCP is more stable.

Steve789 commented 6 years ago

VMess over TCP errors too. Client: [Warning]failed to handler mux client connection > Proxy|VMess|Outbound: connection ends > context canceled @DarienRaymond

leewi9 commented 6 years ago

@dexcomman

It may be that your local clock and the VPS clock are out of sync; once they drift far enough you get all sorts of errors.
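For anyone who wants to rule that out: the VMess request embeds a timestamp, and the server rejects requests whose timestamp drifts too far (the tolerance is on the order of a minute or two; treat the exact window as an assumption). A quick check, assuming a systemd-based Linux box:

```shell
# Print the local clock as Unix time; run the same command on the VPS
# (e.g. over SSH) and compare. The difference should stay well under a minute.
date -u +%s

# If the clocks drift, enable NTP synchronization (needs root):
# timedatectl set-ntp true
```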

dexcomman commented 6 years ago

@leewi9

I actually wish that were it... I've kept the offset within about two seconds.

bitching commented 6 years ago

@DarienRaymond Why not take a look at how udp2raw does it? I once hit a Great Wall Broadband network where mKCP was QoS'd into uselessness while TCP and ICMP were fine...

dexcomman commented 6 years ago

It may simply be unsolvable. In my tests both TCP and UDP get flagged easily; perhaps the protocol needs updating to keep pace with the GFW's upgrades?

Dakai commented 6 years ago

After many more experiments I gave up on mKCP and switched to TLS + WebSocket + nginx; it has been stable since, and the speed isn't bad.
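For reference, a minimal sketch of the client streamSettings for that kind of setup. The path and serverName below are placeholders, and the matching nginx site would forward that WebSocket path to v2ray's local port; the details depend on your deployment:

```json
{
  "streamSettings": {
    "network": "ws",
    "security": "tls",
    "wsSettings": {
      "path": "/ray"
    },
    "tlsSettings": {
      "serverName": "example.com"
    }
  }
}
```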

Greatsaltedfish commented 6 years ago

Confirmed: it was a network problem at the VPS data center, unrelated to v2ray. Thanks everyone for the replies.


dexcomman commented 6 years ago

@Greatsaltedfish The data center throttling UDP?

dexcomman commented 6 years ago

After a lot more testing I think the blame lies with the ISP; I'd suggest avoiding China Telecom... Unicom is much better, though occasional drops remain. Closing this for now; no new symptoms so far.

ak47for commented 6 years ago

Did you ever solve this? On the latest version I still see it, just with a lower reproduction rate. io: read/write on closed pipe still appears, so it's probably not ISP-related.

decemer commented 5 years ago

me too

jw-star commented 4 years ago

It was caused by the domain missing the www prefix; adding it solved the problem for me.

pureblue007 commented 4 years ago

It was caused by the domain missing the www prefix; adding it solved the problem for me.

That's exactly it: after I added the www subdomain it worked. Version 4.21.3.

bilibilistack commented 4 years ago

It was caused by the domain missing the www prefix; adding it solved the problem for me.

That's exactly it: after I added the www subdomain it worked. Version 4.21.3.

Do you add it in the host path? Or does the server's DNS have to resolve a www subdomain?

lks6776 commented 4 years ago

It was caused by the domain missing the www prefix; adding it solved the problem for me.

Hello, could you explain how you solved it?