Closed: ftasnetamot closed this issue 5 months ago
This is a fantastic contribution, thanks. I'll try it on my side, too.
I confirm it works perfectly, with about 10 minutes of setup. I'll post about this new method on past issues about transparent proxying; this is a major contribution, thanks again!
Something we had noticed with the iptables marking was that details of the setup changed the way it worked, typically "sslh running on the same host as sshd" versus "sslh running on a different host" (or sshd in a Docker container). Considering what you say about the routing, I am not sure your method would work; do you have an opinion? I might investigate if I get the time, but that's still a scarce resource :-)
I am currently playing around with several scenarios. I can confirm that it works for all setups where the default route back to the internet flows through the host running sslh. BUT: if you use that host's original IP, all other traffic from/to the internet is dead, because of the general "source" route. As soon as you use an additional IP address just for the services that should be hidden, only traffic from that IP will be caught (see the sketch below). I am also playing a little with systemd, and I will update my description within the next few days with the outcome of those tests.
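To make the second case concrete, here is a minimal sketch of what I mean; the interface name, address, and table number are example values, not necessarily the exact ones from the write-up:

```sh
# Extra address on a dummy interface, used only by the hidden services
# (192.0.2.1 and table 100 are example values)
ip link add dummy0 type dummy
ip link set dummy0 up
ip addr add 192.0.2.1/32 dev dummy0

# Only packets sourced from that extra address consult the special table;
# traffic from the host's original IP keeps using the main routing table,
# so the rest of the host's internet traffic is unaffected.
ip rule add from 192.0.2.1 lookup 100
```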
But the biggest point for me was to understand exactly how the packet flow happens, and why the connections look weird at first glance.
I am now of the opinion that the two routing rules should go directly into the network startup configuration (a sketch follows below). That way, the startup scripts can be completely identical for the transparent and the classic proxy. The main reason for this, however, is that with those routing rules in place, traffic from the dummy0 IP to the internet is blocked even when sslh is down. I see no reason, at least for the all-in-one-host solution, why the startup script should be modified.
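As a sketch, assuming Debian-style ifupdown and the same example values as above (dummy0, 192.0.2.1, table 100), the whole setup could live in the network configuration like this:

```sh
# /etc/network/interfaces fragment (example values, adjust as needed)
auto dummy0
iface dummy0 inet static
    address 192.0.2.1/32
    pre-up ip link add dummy0 type dummy || true
    # Rule 1: replies sourced from the hidden-services address use table 100
    post-up ip rule add from 192.0.2.1 lookup 100
    # Rule 2: table 100 delivers everything locally, so those replies go
    # back to sslh; as a side effect, packets from 192.0.2.1 can never
    # leave the host directly, even when sslh is down
    post-up ip route add local 0.0.0.0/0 dev lo table 100
    pre-down ip rule del from 192.0.2.1 lookup 100 || true
    pre-down ip route del local 0.0.0.0/0 dev lo table 100 || true
```

Because the rules are tied to the interface, they come and go with dummy0, and the sslh startup script itself needs no routing logic at all.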
… iptables/nftables and loopback routing.
Explain how all that works. Output of issue #443