yaser-k opened 9 months ago
Please send the logs in debug mode, but make sure you remove any sensitive data (IPs, etc.) from them.
Here is the log from service start until it stops responding to HTTPS. I replaced all requested URLs with example.com. sniproxy log.txt
I can't see any issues in the logs, and I don't see this problem in any of my other instances either. I reckon the issue might not be in sniproxy.
Do the other instances use an upstream SOCKS? I think the problem is how sniproxy manages SOCKS connections: it seems sniproxy creates a new SOCKS connection for every request. Am I right?
Yes, I believe so. I'll do more testing with a SOCKS5 upstream and will let you know. Can I ask which implementation of SOCKS5 you're using as the upstream?
I am using xray-core, which implements my inbound SOCKS5 and outbound connection: https://github.com/XTLS
You can see my xray config in the attached file. config.txt
and these are the related settings in my sniproxy config:

```yaml
upstream_dns: "tcp://127.0.0.1:54"
upstream_dns_over_socks5: false
upstream_socks5: "socks5://127.0.0.1:10808"
```
I did some tests using `brook` as the upstream SOCKS5. I can confirm performance issues when `upstream_dns_over_socks5: true`, but when `upstream_dns_over_socks5: false`, everything works as expected. Would you be able to replicate your tests with `brook`?
I tested brook in a chain: sniproxy --socks5--> brook --socks5--> xray. This setup seems more stable, which I think is because of some sort of SOCKS connection management in brook. Although I still get these errors from time to time:

```
could not connect to target with error: read tcp 127.0.0.1:18822->127.0.0.1:10810: i/o timeout service=https
[DNS] Empty DNS response for ***.com. service=https
```
Connection pooling is something I'm willing to consider. I need to finish up some clean-ups and make sure the DNS client itself is working properly first.
I set up this sniproxy with an upstream SOCKS5:

```yaml
upstream_dns: "tcp://127.0.0.1:54" # it's a long story but it works
upstream_socks5: "socks5://127.0.0.1:10808"
```
The upstream SOCKS5 and upstream DNS are another process on the same machine. Every time I stop and start sniproxy.service, it works for a few seconds, answers some HTTPS requests correctly, and then stops responding. When it stops responding, I check the following modules separately:

- upstream DNS: working. Tested with `dig +tcp @127.0.0.1 -p 54 example.com`
- upstream SOCKS5: working. Tested with `curl -x socks5h://127.0.0.1:10808 https://api.ipify.org?format=json`
- sniproxy DNS: working. Tested with `dig @127.0.0.1 -p 53 example.com` (example.com is in domains.csv, so it returns the sniproxy IP)
But the HTTPS module of sniproxy does not respond anymore. Tested with `curl https://api.ipify.org?format=json --resolve api.ipify.org:443:sni_proxy_IP`. When I restart sniproxy (`systemctl restart sniproxy.service`), it starts responding again for a few seconds and the above test also works.
When I run `lsof -i -P -n | grep sniproxy`, I see a lot of CLOSE_WAIT connections from clients; I don't know if it is relevant:

```
TCP sniproxy_ip:443->client_ip:port (CLOSE_WAIT)
```
If you need sniproxy logs, I can send them, but I don't know where to look.