MetaCubeX / mihomo


[Bug] TUN 1.18.6 adds a new route that makes replies to inbound packets go through the TUN interface #1368

Open · zeyugao opened this issue 4 months ago

zeyugao commented 4 months ago

Verify steps

Operating System

Linux

System Version

Ubuntu 22.04

Mihomo Version

Mihomo Meta v1.18.6 linux amd64 with go1.22.4 Mon Jul 1 15:01:54 UTC 2024 Use tags: with_gvisor

Configuration File

find-process-mode: strict
mode: rule

tun:
  enable: true
  stack: system # gvisor/mixed
  dns-hijack:
    - any:53
  auto-detect-interface: true # automatically detect the outbound interface
  auto-route: true # set up the routing table
  inet4-route-address: # use custom routes instead of the default route when auto-route is enabled
    - 0.0.0.0/1
    - 128.0.0.0/1
  inet6-address: null

sniffer:
  enable: false

# DNS configuration
dns:
  cache-algorithm: arc
  enable: true # when disabled, the system DNS is used
  prefer-h3: true # enable HTTP/3 for DoH; attempted concurrently
  listen: 0.0.0.0:53 # enable the DNS server listener
  default-nameserver:
    - 114.114.114.114
    - 8.8.8.8
    - tls://1.12.12.12:853
    - tls://223.5.5.5:853
  enhanced-mode: fake-ip # or redir-host

  fake-ip-range: 198.18.0.1/16 # fake-ip pool
  fake-ip-filter:
    - '*.lan'

  nameserver:
    - 114.114.114.114 # default value
    - 8.8.8.8 # default value
    - tls://223.5.5.5:853 # DNS over TLS
    - https://doh.pub/dns-query # DNS over HTTPS
    - https://dns.alidns.com/dns-query#h3=true # force HTTP/3: independent of prefer-h3, HTTP/3 is forced for this DoH server and it will not work if unsupported
    - https://mozilla.cloudflare-dns.com/dns-query#DNS&h3=true # select the DNS policy group and use HTTP/3
    - quic://dns.adguard.com:784 # DNS over QUIC

  nameserver-policy:
    "geosite:cn,private,apple":
      - https://doh.pub/dns-query
      - https://dns.alidns.com/dns-query
    "geosite:category-ads-all": rcode://success
    "www.baidu.com,+.google.cn": [223.5.5.5, https://dns.alidns.com/dns-query]

Description

In 1.18.6, an extra rule is added that makes replies to inbound traffic go out through the TUN device, so inbound connections from outside can no longer be established.

Looking at the 1.18.6 release notes, the change on the TUN side feels related to this: https://github.com/MetaCubeX/mihomo/commit/09be5cbc99f97238aa95b9ceab9db39e53e1b3a9#diff-06e23ea20a066a0e717b5eaa625dfd3f1d11439f4ad5bd705d16d6e1758b39c0 , but judging from the new config options in that commit it doesn't actually seem related.

Reproduction Steps

With 1.18.6, after starting it, look at the routing rules; the relevant ones are:

9000:   from all to 198.18.0.0/30 lookup 2022
9001:   from all lookup 2022 suppress_prefixlength 0
9002:   not from all dport 53 lookup main suppress_prefixlength 0
9002:   from all ipproto icmp goto 9010
9002:   from all iif Meta goto 9010
9003:   not from all iif lo lookup 2022
9003:   from 0.0.0.0 iif lo lookup 2022
9003:   from 198.18.0.0/30 iif lo lookup 2022
9010:   from all nop

The rule at priority 9001 makes everything go through the Meta device, so the replies to inbound connections such as SSH are sent into the Meta interface and the inbound connection can never be established. If I manually add a source-based rule that skips the Meta routing, inbound works normally again:

ip rule add from xxx.xxx.xxx.xxx/24 goto 9010 priority 8998
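
For reference, this state can be inspected with standard iproute2 commands (table 2022 comes from the rule listing above; the two addresses are placeholders for the remote client and the server's own address):

ip rule show                                  # the policy rules listed above
ip route show table 2022                      # contents of routing table 2022 used by those rules
ip route get <client-ip> from <server-ip>     # which device the reply to an inbound connection would leave on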

In the previous version, 1.18.5, the ip rules did not contain the from all lookup 2022 suppress_prefixlength 0 entry:

9000:   from all to 198.18.0.0/30 lookup 2022
9001:   from all ipproto icmp goto 9010
9002:   not from all dport 53 lookup main suppress_prefixlength 0
9002:   not from all iif lo lookup 2022
9002:   from 0.0.0.0 iif lo lookup 2022
9002:   from 198.18.0.0/30 iif lo lookup 2022
9010:   from all nop

Both versions were run with the same configuration file.

Logs

No response

xishang0128 commented 4 months ago

inet4-route-address: # use custom routes instead of the default route when auto-route is enabled

ByteArray0 commented 4 months ago

Remove the inet4-route-address entries 0.0.0.0/1 and 128.0.0.0/1 (the custom routes used instead of the default route when auto-route is enabled).
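
In other words, the tun section from the config at the top of this issue would simply become (a sketch: the custom routes dropped, everything else unchanged):

tun:
  enable: true
  stack: system # gvisor/mixed
  dns-hijack:
    - any:53
  auto-detect-interface: true # automatically detect the outbound interface
  auto-route: true # let auto-route manage the routes itself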

I've observed a related phenomenon here: when a remote server runs 1.18.6, setting any route-exclude-address, for example

  route-exclude-address:
    - 0.0.0.0/8
    - 10.0.0.0/8
    - 192.168.0.0/16
    - 172.16.0.0/12

also causes SSH to disconnect; removing it does not. 1.18.5 and earlier versions do not have this problem.
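
A possible stopgap on a remote server is the same kind of source-based bypass rule zeyugao mentioned above (the /24 is a placeholder for the server's own public subnet, and the 8998/9010 priorities come from the 1.18.6 rule listing earlier in this thread, so they may differ elsewhere):

ip rule add from xxx.xxx.xxx.xxx/24 goto 9010 priority 8998   # replies sourced from that subnet jump past the mihomo rules to the 9010 nop rule
ip rule del from xxx.xxx.xxx.xxx/24 priority 8998             # remove the workaround again later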

wwqgtxx commented 3 months ago

Try https://github.com/MetaCubeX/mihomo/commit/117cdd8b541de0e1048f3a573679ff8b61893797

huzheyi commented 2 months ago

I ran into a problem similar to yours. I'm running mihomo 1.18.8 on VyOS as a container with host networking.

In my case, DNAT port mappings are configured on VyOS. Packet captures show the following when the mapped service is accessed from an external network:

  1. The packet first arrives on the pppoe0 interface; a capture on pppoe0 shows:
15:50:28.099265 IP 43.226.237.69.32153 > 123.117.170.178.4433: Flags [S], seq 2965468837, win 64240, options [mss 1448,sackOK,TS val 70108068 ecr 0,nop,wscale 7], length 0
  2. The packet is then forwarded to the internal network according to the DNAT rule; a capture on br0 shows:
15:50:28.099505 IP 43.226.237.69.32153 > 192.168.1.41.443: Flags [S], seq 2965468837, win 64240, options [mss 1448,sackOK,TS val 70108068 ecr 0,nop,wscale 7], length 0
  3. The internal server then answers the TCP request (SYN-ACK); a capture on br0 shows:
15:50:28.099613 IP 192.168.1.41.443 > 43.226.237.69.32153: Flags [S.], seq 3703792511, ack 2965468838, win 31856, options [mss 1460,sackOK,TS val 1923171828 ecr 70108068,nop,wscale 7], length 0
  4. After that, this packet enters the Meta interface; a capture on Meta shows:
15:50:28.099742 IP 123.117.170.178.4433 > 43.226.237.69.32153: Flags [S.], seq 3703792511, ack 2965468838, win 31856, options [mss 1460,sackOK,TS val 1923171828 ecr 70108068,nop,wscale 7], length 0
  5. Normally this packet should then appear on pppoe0, but in fact it does not, and the external host never receives any reply.
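
For anyone reproducing this, the per-interface captures above correspond to something along these lines (a sketch; interface names are from this setup and the filters simply match the flow shown):

tcpdump -ni pppoe0 'tcp port 4433'                     # external side of the DNAT
tcpdump -ni br0 'host 192.168.1.41 and tcp port 443'   # internal side after the DNAT
tcpdump -ni Meta 'tcp port 4433'                       # what ends up on the TUN device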

When I stop mihomo, the relevant packets are captured normally on both pppoe0 and br0. I've confirmed this has nothing to do with the VyOS firewall, because the behavior is the same even with the firewall disabled (allow-all policy).

I'm not sure what the correct behavior should be when we reach step 4:

  1. the internal server's response packet should not enter the Meta interface and should go straight out via pppoe0, or
  2. the response packet should enter the Meta interface and then be sent out via pppoe0.

My mihomo configuration is as follows:

#port: 7890
#socks-port: 7891
mixed-port: 7890
#redir-port: 7892
#tproxy-port: 9898

allow-lan: true
bind-address: '*'

find-process-mode: strict

mode: rule

geox-url:
  geoip: "https://fastly.jsdelivr.net/gh/MetaCubeX/meta-rules-dat@release/geoip.dat"
  geosite: "https://fastly.jsdelivr.net/gh/MetaCubeX/meta-rules-dat@release/geosite.dat"
  mmdb: "https://fastly.jsdelivr.net/gh/MetaCubeX/meta-rules-dat@release/geoip.metadb"

# geodata-mode: true
geodata-loader: standard
geo-auto-update: true
geo-update-interval: 72

log-level: warning

ipv6: true

external-controller: 0.0.0.0:9090

tcp-concurrent: true

external-ui: /root/.config/mihomo/ui
external-ui-url: "https://github.com/MetaCubeX/metacubexd/archive/refs/heads/gh-pages.zip"

global-client-fingerprint: ios

profile:
  store-selected: true
  store-fake-ip: true

tun:
  enable: true
  stack: mixed
  dns-hijack:
    - 'any:53'
  auto-route: true
#  auto-redirect: true
  auto-detect-interface: true
  gso: true
  gso-max-size: 65536
  include-interface:
    - br0

sniffer:
  enable: true
  sniff:
    TLS:
      ports: [443, 8443]
    HTTP:
      ports: [80, 8080-8880]
      override-destination: true
    QUIC:
      ports: [443,8443]
  force-domain:
    - +.v2ex.com
  skip-domain:
     - Mijia Cloud

dns:
  cache-algorithm: arc
  enable: true
  prefer-h3: true
  listen: :5353
  ipv6: true

  default-nameserver:
    - 119.29.29.29
    - 223.5.5.5
    - system

  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16
  # use-hosts: true

  respect-rules: false

  fake-ip-filter:
     - '*.lan'
     - '*.linksys.com'
     - '+.pool.ntp.org'
     - localhost.ptlogin2.qq.com
     - openpgpkey.kernel.org

  nameserver:
    - https://doh.pub/dns-query
    - https://dns.alidns.com/dns-query

  fallback:
    - https://1.1.1.1/dns-query
    - tls://1.0.0.1:853

  fallback-filter:
    geoip: true
    geoip-code: CN
    geosite:
      - gfw
    ipcidr:
      - 240.0.0.0/4
    domain:
      - '+.google.com'
      - '+.facebook.com'
      - '+.youtube.com'

  nameserver-policy:
    "geosite:private,cn,private,apple,microsoft@cn,category-games@cn":
      - https://doh.pub/dns-query
      - https://dns.alidns.com/dns-query

proxies:

... (omitted)

rule-providers:
  bypass-source:
    type: file
    behavior: classical
    path: "bypass-source.yaml"

rules:

  - RULE-SET,bypass-source,DIRECT
  - GEOIP,private,DIRECT
  - GEOIP,cn,DIRECT
  - GEOSITE,private,DIRECT
  - GEOSITE,cn,DIRECT
  - GEOSITE,apple,DIRECT
  - GEOSITE,microsoft@cn,DIRECT
  - GEOSITE,category-games@cn,DIRECT
  - GEOIP,telegram,PROXY,no-resolve
  - MATCH,PROXY

The internal host 192.168.1.42/32 is in bypass-source.yaml, but whether or not it is listed there doesn't seem to affect the result.
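
For context, a rule-provider with classical behavior is just a payload list of rule entries, so bypass-source.yaml presumably contains something like the following (contents assumed; only the 192.168.1.42/32 host comes from the sentence above):

payload:
  - SRC-IP-CIDR,192.168.1.42/32   # match by source IP so traffic from this host hits the DIRECT rule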

huzheyi commented 2 months ago

Also, my ip rule output is somewhat different from yours:

0:  from all lookup local
9000:   from all iif br0 goto 9003
9001:   from all goto 9010
9003:   from all nop
9003:   from all to 198.18.0.0/30 lookup 2022
9004:   not from all dport 53 lookup main suppress_prefixlength 0
9004:   from all ipproto icmp goto 9010
9004:   from all iif Meta goto 9010
9005:   not from all iif lo lookup 2022
9005:   from 0.0.0.0 iif lo lookup 2022
9005:   from 198.18.0.0/30 iif lo lookup 2022
9010:   from all nop
32766:  from all lookup main
32767:  from all lookup default

halfroom commented 1 day ago

I'm running into this too. On a Debian host I start mihomo in TUN mode directly with docker host networking. For all the ports published by my other Docker container services, which my router exposes to the public internet through port forwarding, the public side gets no response. But if I start those containers in docker host mode, access works fine. Would rolling back to an older version fix this?