MetaCubeX / mihomo


[Bug] SIGILL: illegal instruction #1525

Open whao opened 1 week ago

whao commented 1 week ago

Verify steps

Operating System

Linux

System Version

Linux portal-gateway 6.1.0-25-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.106-3 (2024-08-26) x86_64 GNU/Linux

Mihomo Version

Mihomo Meta v1.18.8 linux amd64 with go1.23.0 Mon Sep  2 08:35:48 UTC 2024
Use tags: with_gvisor

Configuration File

# port: 7890 # HTTP(S) proxy server port
# socks-port: 7891 # SOCKS5 proxy port
# mixed-port: 10801 # mixed HTTP(S) and SOCKS proxy port
# redir-port: 7892 # transparent proxy port, for Linux and macOS

# Transparent proxy server port for Linux (TProxy TCP and TProxy UDP)
# tproxy-port: 7893

allow-lan: true # allow LAN connections
# bind-address: "*" # IP address to bind to; only takes effect when allow-lan is true, '*' means all addresses

# find-process-mode has 3 values: always, strict, off
#  - always: enabled, force process matching for all connections
#  - strict: default, let clash decide whether to enable it
#  - off: do not match processes, recommended on routers
find-process-mode: off

mode: rule

# custom geodata URLs
geox-url:
  geoip: "https://fastly.jsdelivr.net/gh/MetaCubeX/meta-rules-dat@release/geoip.dat"
  geosite: "https://fastly.jsdelivr.net/gh/MetaCubeX/meta-rules-dat@release/geosite.dat"
  mmdb: "https://fastly.jsdelivr.net/gh/MetaCubeX/meta-rules-dat@release/geoip.metadb"
geo-auto-update: true # whether to update geodata automatically
geo-update-interval: 24 # update interval, in hours

log-level: debug # log level: silent/error/warning/info/debug

ipv6: true # master IPv6 switch; when disabled, all IPv6 connections are blocked and AAAA records are suppressed in DNS replies
external-controller: 0.0.0.0:9093

external-ui: ui
external-ui-name: xd
external-ui-url: "https://github.com/MetaCubeX/metacubexd/archive/refs/heads/gh-pages.zip"

interface-name: eth0

# global-client-fingerprint: random

routing-mark: 6666

profile:
  store-selected: true
  # store the proxy-group selections made through the API, for reuse on the next startup
  store-fake-ip: true
  # store the fake-ip mapping table; when a domain connects again, the previously mapped address is reused

tun:
  enable: true
  stack: system
  auto-detect-interface: false
  auto-route: false

# ebpf configuration
# ebpf:
#   redirect-to-tun: # UDP+TCP; do not enable auto-route when using this feature
#     - eth0

sniffer:
  enable: false
  ## force sniffing on traffic identified as redir-host type
  ## e.g. Tun, Redir and TProxy with DNS in redir-host mode all qualify
  force-dns-mapping: true
  ## force sniffing on all traffic for which no domain name was obtained
  # parse-pure-ip: false
  # whether to use the sniffed result as the actual destination, default true
  # global setting, lower priority than the per-protocol settings under sniffer.sniff
  override-destination: false
  sniff: # by default, TLS and QUIC sniff port 443 if ports is not configured
    QUIC:
      ports: [ 443 ]
    TLS:
      ports: [443, 8443]

    # port 80 is sniffed by default
    HTTP: # ports to sniff
      ports: [80, 8080-8880]
      # can override sniffer.override-destination
      override-destination: true
  force-domain:
    - +.v2ex.com
  ## skip these sniffing results
  # skip-domain:
  #   - Mijia Cloud

# DNS configuration
dns:
  cache-algorithm: arc
  enable: true # when disabled, the system DNS is used
  prefer-h3: false # enable HTTP/3 support for DoH; both will be tried concurrently
  listen: 0.0.0.0:1053 # enable the DNS server listener
  ipv6: true # false returns empty results for AAAA queries
  # ipv6-timeout: 300 # unit: ms; during internal dual-stack concurrent resolution, how long to wait for the upstream AAAA answer, default 100ms
  # DNS servers used to resolve the domain names of nameserver, fallback and other DNS server entries
  # only plain IP addresses may be used here; encrypted DNS is allowed
  default-nameserver:
    - 61.132.163.68
    - 202.102.213.68
  enhanced-mode: redir-host # fake-ip or redir-host

  fake-ip-range: 198.18.0.1/16 # fake-ip address pool

  use-hosts: false # look up entries in hosts

  respect-rules: true

  # domains that should not use fake-ip
  # fake-ip-filter:
  #   - '*.lan'
  #   - localhost.ptlogin2.qq.com

  # main DNS configuration
  # supports UDP, TCP, DoT, DoH, DoQ
  # this is the primary DNS configuration and affects all direct connections; make sure to use DNS servers that resolve mainland China domains accurately
  nameserver:
    - 61.132.163.68
    - 202.102.213.68

  # when fallback is configured, the IPs returned by nameserver are checked for being CN; optional
  # when the IP is not CN, the DNS result from fallback is used instead
  # make sure the fallback servers can be queried reliably
  fallback:
    - https://security.cloudflare-dns.com/dns-query # the DNS query can be sent through a proxy; ProxyGroupName is a proxy-group or node name, the proxy setting takes precedence over the configured outbound interface, and if the group or node name is not found the outbound interface is used

  # DNS servers used only for resolving proxy-node domain names; optional
  # if these servers fail to answer, nameserver is used instead; queries are not concurrent
  proxy-server-nameserver:
    - 61.132.163.68 
    - 202.102.213.68

  # conditions for using fallback
  fallback-filter:
    geoip: true # whether to use geoip
    geoip-code: CN # when the IP resolved via nameserver is CN in the geoip database, the DNS result from fallback is not used

  nameserver-policy:
    "geosite:gfw":
      - https://security.cloudflare-dns.com/dns-query

proxies:
  - name: random-hysteria2-proxy
    type: hysteria2
    server: 0.0.0.0
    port: 8080
    password: password
    fast-open: true
    sni: www.example.com

proxy-groups:
  - name: PROXY
    type: select
    proxies:
      - random-hysteria2-proxy

  - name: MATCH
    type: select
    proxies:
      - PROXY
      - DIRECT

rule-providers:

  reject:
    type: http
    behavior: domain
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/reject.txt"
    path: ./ruleset/reject.yaml
    interval: 86400

  icloud:
    type: http
    behavior: domain
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/icloud.txt"
    path: ./ruleset/icloud.yaml
    interval: 86400

  apple:
    type: http
    behavior: domain
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/apple.txt"
    path: ./ruleset/apple.yaml
    interval: 86400

  google:
    type: http
    behavior: domain
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/google.txt"
    path: ./ruleset/google.yaml
    interval: 86400

  proxy:
    type: http
    behavior: domain
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/proxy.txt"
    path: ./ruleset/proxy.yaml
    interval: 86400

  direct:
    type: http
    behavior: domain
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/direct.txt"
    path: ./ruleset/direct.yaml
    interval: 86400

  private:
    type: http
    behavior: domain
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/private.txt"
    path: ./ruleset/private.yaml
    interval: 86400

  gfw:
    type: http
    behavior: domain
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/gfw.txt"
    path: ./ruleset/gfw.yaml
    interval: 86400

  tld-not-cn:
    type: http
    behavior: domain
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/tld-not-cn.txt"
    path: ./ruleset/tld-not-cn.yaml
    interval: 86400

  telegramcidr:
    type: http
    behavior: ipcidr
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/telegramcidr.txt"
    path: ./ruleset/telegramcidr.yaml
    interval: 86400

  cncidr:
    type: http
    behavior: ipcidr
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/cncidr.txt"
    path: ./ruleset/cncidr.yaml
    interval: 86400

  lancidr:
    type: http
    behavior: ipcidr
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/lancidr.txt"
    path: ./ruleset/lancidr.yaml
    interval: 86400

  applications:
    type: http
    behavior: classical
    url: "https://cdn.jsdelivr.net/gh/Loyalsoldier/clash-rules@release/applications.txt"
    path: ./ruleset/applications.yaml
    interval: 86400

rules:

  ### rules from GitHub rule providers
  - RULE-SET,applications,DIRECT
  - DOMAIN,clash.razord.top,DIRECT
  - DOMAIN,yacd.haishan.me,DIRECT
  - RULE-SET,private,DIRECT
  - RULE-SET,icloud,DIRECT
  - RULE-SET,apple,DIRECT
  # - RULE-SET,google,DIRECT
  - RULE-SET,proxy,PROXY
  - RULE-SET,direct,DIRECT
  - RULE-SET,lancidr,DIRECT
  - RULE-SET,cncidr,DIRECT
  - RULE-SET,telegramcidr,PROXY
  - GEOIP,LAN,DIRECT
  - GEOIP,CN,DIRECT
  - MATCH,MATCH
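
For reference, a config like the one above can usually be syntax-checked before letting the core run. This is only a sketch assuming the Clash-style flags mihomo inherits (-d working directory, -f config file, -t test only); the /etc/mihomo paths are placeholders.

# sketch: validate the config without starting the proxy (flags and paths are assumptions)
mihomo -d /etc/mihomo -f /etc/mihomo/config.yaml -t

# then run it once in the foreground to confirm it starts cleanly
mihomo -d /etc/mihomo -f /etc/mihomo/config.yaml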

Description

This core is running on a Proxmox Debian guest with CPU passthrough.

OS: Debian GNU/Linux 12 (bookworm) x86_64 
Host: KVM/QEMU (Standard PC (Q35 + ICH9, 2009) pc-q35-9.0) 
Kernel: 6.1.0-25-amd64 
Uptime: 5 days, 16 hours, 16 mins 
Packages: 490 (dpkg) 
Shell: bash 5.2.15 
Resolution: 1280x800 
Terminal: /dev/pts/0 
CPU: Intel Celeron N5105 (4) @ 1.996GHz 
GPU: 00:01.0 Vendor 1234 Device 1111 
Memory: 371MiB / 1929MiB

Reproduction Steps

With the provided config file and the core left running, the OS would automatically reboot after a random duration, or just hang with no response at 25% CPU usage: no ICMP, SSH, or even VNC response (black screen). This is the first time I have captured the failure log. I have not tested with the latest Alpha branch because the failure is fairly random to reproduce.
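
Since the hang/reboot is hard to trigger on demand, one way to make sure every crash is captured is to run the core under systemd with persistent journald storage and pull the previous boot's log after each reboot. A minimal sketch, assuming the core is installed as a unit named mihomo.service:

# keep journald logs across reboots so the crash from the previous boot survives
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald

# follow the core's log while trying to reproduce
journalctl -u mihomo -f

# after an unexpected reboot, dump the core's log from the previous boot
journalctl -u mihomo -b -1 --no-pager > mihomo-last-boot.log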

Logs

The log exceeded the maximum character count; see the full journalctl log at this link:
https://codefile.io/f/Sts26x5eIt

Append

Here is the out-of-memory journalctl log from the last crash. I have enabled debug logging and will upload the memory dump file after the next crash.

https://codefile.io/f/5EJnfbU3C6
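
For the planned memory dump: since the core is a Go binary, one hedged option is to make the Go runtime abort with a core dump on fatal signals such as SIGILL and raise the core size limit for the service. The mihomo.service unit name is an assumption, and systemd-coredump must be installed for coredumpctl to work.

# drop-in override for the (assumed) mihomo.service unit
sudo systemctl edit mihomo
#   [Service]
#   Environment=GOTRACEBACK=crash
#   LimitCORE=infinity
sudo systemctl restart mihomo

# after the next crash, list and export the dump via systemd-coredump
coredumpctl list mihomo
coredumpctl dump mihomo -o mihomo.core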

wwqgtxx commented 5 days ago

Maybe you can try switching to the amd64-compatible version, or to a build compiled with an older Go toolchain, such as one containing the go120 or go122 tags.

This problem is more likely a CPU instruction-set incompatibility caused by virtualization, or even a hardware failure (such as memory errors).
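
If it is an instruction-set mismatch, a quick check is to compare the CPU features the hypervisor exposes to the guest against the installed build. The flag list below is an assumption based on the x86-64-v2/v3 feature levels that newer amd64 builds may rely on (the Celeron N5105 has SSE4.2 but no AVX/AVX2).

# CPU features the KVM guest actually advertises
grep -o -w -E 'sse4_1|sse4_2|popcnt|avx|avx2|bmi2|fma' /proc/cpuinfo | sort -u

# version, architecture and build tags of the installed core
mihomo -v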

whao commented 5 days ago

Maybe you can try switching to the amd64-compatible version, or to a build compiled with an older Go toolchain, such as one containing the go120 or go122 tags.

This problem is more likely a CPU instruction-set incompatibility caused by virtualization, or even a hardware failure (such as memory errors).

Yes, it is already the compatible version. I'll try a build compiled with an older Go toolchain and report back. Thanks for your help :)