cmehay / docker-tor-hidden-service


Kubernetes #14

Open tgeek77 opened 7 years ago

tgeek77 commented 7 years ago

This isn't really an issue, but a question. Has anyone successfully used this container in Kubernetes and if so, how did you set it up?

cmehay commented 7 years ago

I'm not familiar yet with Kubernetes, so I'm afraid I will not be able to give you an answer, but feel free to send a patch to run this container in a Kubernetes cluster.

AndreKoepke commented 5 years ago

Pretty simple. Just set the environment variables as described in https://github.com/cmehay/docker-tor-hidden-service/blob/master/README.md:

apiVersion: v1
kind: Pod
metadata:
  name: tor-expose
spec:
  restartPolicy: Never

  containers:
  - name: tor
    image: goldy/tor-hidden-service:latest
    env:
      - name: <SERVICE-NAME>_TOR_SERVICE_HOSTS
        value: "<EXPOSE-PORT>:<HOST>:<HOST-PORT>"
ghost commented 4 years ago

Or you can convert from Docker Compose to Kubernetes with Kompose: https://github.com/kubernetes/kompose
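For example, assuming kompose is installed and a docker-compose.yml for this image sits in the current directory, a minimal sketch would be:

kompose convert -f docker-compose.yml
kubectl apply -f .

kompose convert writes one set of Kubernetes manifests per Compose service into the current directory, which you can then review and apply.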

fabacab commented 3 years ago

AndreKoepke's advice about running this container as a Kubernetes Pod would work, but is not "the Kubernetes Way™" because in Kubernetes, Pods should be considered ephemeral as they can be rescheduled without a human operator.

For most Kubernetes use cases, you probably want a Service (not a Pod), because a Service address is stable across the entire cluster. For that, consider kragniz/tor-controller, which is a native Kubernetes CRD (Custom Resource Definition) describing a new OnionService API resource backed by a Tor server running in a pod very much like Andre's suggestion.

Besides the Kubernetes packaging, the biggest difference is that it isn't actually this container, but it should do what you want in a much more Kubernetes-friendly way.
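For a sense of what that looks like, here is a rough sketch of an OnionService resource written from memory; the apiVersion and field names are assumptions and may have changed, so check the tor-controller README before relying on it:

apiVersion: tor.k8s.io/v1alpha1
kind: OnionService
metadata:
  name: example-onion-service
spec:
  # Onion service version (2 or 3, depending on what the controller supports).
  version: 3
  # Selects the pods backing the hidden service.
  selector:
    app: example
  ports:
    # Onion port 80 forwarded to the pods' port 8080.
    - publicPort: 80
      targetPort: 8080
  # Secret holding the onion private key so the address stays stable.
  privateKeySecret:
    name: example-onion-key
    key: private_key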

tgeek77 commented 3 years ago

I was thinking that it could be done like this: the pod would be a webserver plus a Tor sidecar for networking, and /var/lib/tor would be a secret that every pod could use. I haven't actually created it yet, but it might work.
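A rough sketch of that idea, with made-up names (the secret tor-hidden-service-keys, the nginx image and the mount path are all placeholders; mounting the whole /var/lib/tor as a read-only secret may or may not work in practice, so check the image's README for the directory it actually expects the keys in):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-tor
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80
  - name: tor
    image: goldy/tor-hidden-service:latest
    env:
    # Containers in a pod share the network namespace,
    # so the sidecar can reach the webserver on 127.0.0.1.
    - name: WEB_TOR_SERVICE_HOSTS
      value: "80:127.0.0.1:80"
    volumeMounts:
    # Keys mounted from a secret so the .onion address survives rescheduling.
    - name: tor-keys
      mountPath: /var/lib/tor
  volumes:
  - name: tor-keys
    secret:
      secretName: tor-hidden-service-keys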

I spoke with kragniz about this on the k8s slack but he said that the "status is on hold until I upgrade the project to kubebuilder 1.0" and that was in January of last year.

ghost commented 3 years ago

I'm experiencing strange connection resets when using this container on K8s. Sometimes my request goes through without any issues, but most of the time the connection gets reset for some reason. Does somebody have a solution, or an idea where I could look?

tgeek77 commented 3 years ago

@venomone I would run kubectl logs -f on the pod running Tor to see whether Tor is throwing any errors, and let it keep running. If that doesn't help, run kubectl exec -it -- bash on the pod, try restarting the Tor daemon manually, and watch the output from there. There might also be some funky stuff going on with your CNI not knowing how to direct traffic to Tor.
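For example (tor-expose is just a placeholder for the name of your Tor pod):

kubectl logs -f tor-expose
kubectl exec -it tor-expose -- bash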

You don't mention what kind of Kubernetes distro you are using; that might also have something to do with it, because not all Kubernetes distributions are the same. I would suggest experimenting only with a fully featured Kubernetes and not something like Minikube or K3s.

ghost commented 3 years ago

Hello,

Thanks for your quick reply. I'm using RKE v1.19.3 on Ubuntu 18.04.5 (KVM virtualized, network MTU is 1450; the CNI plugin's MTU is set to 0, which should mean auto-detect). As the CNI provider I already tried Canal with no success, and also Weave just to check whether it makes any difference, but in the end the behaviour is exactly the same: sometimes my onion site loads and sometimes it does not (most of the time it does not load and the connection gets dropped/reset/lost).

The container log output is the same as on Docker, where everything works fine (verified working on single-node Docker and Docker Swarm). I also tried tuning sysctl.conf to see whether something around conntrack might be flapping, but with no success either:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables=1
net.netfilter.nf_conntrack_buckets=262144
net.netfilter.nf_conntrack_max=1048576
net.netfilter.nf_conntrack_tcp_be_liberal=0
net.netfilter.nf_conntrack_generic_timeout=120
net.netfilter.nf_conntrack_tcp_timeout_established=600
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 30 
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30 
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 15
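For reference, assuming these settings live in /etc/sysctl.conf on each node, they can be reloaded without a reboot:

sudo sysctl -p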

The log output for the onion-hidden-service pod is always the same as on plain docker, docker-swarm or K8s: Bootstrapped 100% (done): Done

The only error I came across before tuning sysctl.conf was in the output of dmesg:

nf_conntrack: nf_conntrack: table full, dropping packet
[48632.803415] weave: port 18(vethwepl89c3652) entered forwarding state
[48632.810091] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl89c3652: link becomes ready
[48639.408566] weave: port 19(vethwepl227b7c5) entered blocking state
[48639.408571] weave: port 19(vethwepl227b7c5) entered disabled state
[48639.408700] device vethwepl227b7c5 entered promiscuous mode
[48639.435898] eth0: renamed from vethwepg227b7c5
[48639.535590] IPv6: ADDRCONF(NETDEV_UP): vethwepl227b7c5: link is not ready
[48639.546009] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl227b7c5: link becomes ready
[48639.546117] weave: port 19(vethwepl227b7c5) entered blocking state
[48639.546120] weave: port 19(vethwepl227b7c5) entered forwarding state
[48639.824735] weave: port 20(vethwepl6fe16e6) entered blocking state
[48639.824739] weave: port 20(vethwepl6fe16e6) entered disabled state

But that is fixed for now!

To see my full sysctl.conf have a look at here: https://pastebin.com/27qFecXX

If somebody has any idea what else I could try to get this working stably, please let me know!

Many thanks in advance :)

ghost commented 3 years ago

After some digging I figured out that it may have something to do with kube-proxy. If I check the logs there, I get:

streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF

ghost commented 3 years ago

I found the issue. It seems that Tor only tries to reach the backend (e.g. HiddenServicePort 80 mywebserver:80) a single time, but inside K8s multiple IP hops can appear due to fancy iptables routing. You can see this pretty well if you try to reach mywebserver:80 using wget:

 wget mywebserver
--2020-11-16 20:07:37--  http://mywebserver/
Resolving mywebserver (mywebserver)... 10.244.0.34, 10.244.0.152, 10.244.1.125, ...
Connecting to mywebserver (mywebserver)|10.244.0.34|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.0.152|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.1.125|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.2.177|:80... connected.

It seems that Tor simply drops the request, and no .onion site gets loaded in your browser if there is any kind of redirect on the route to mywebserver. To work around this behaviour you have to obtain the actual IP of the pod and use that static address. Otherwise you always have to bet on CoreDNS and iptables not doing anything fancy with the IP packet.
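For reference, the pod's own IP (as opposed to the Service IP) can be read like this, where mywebserver-abc123 stands in for the actual pod name:

kubectl get pod mywebserver-abc123 -o jsonpath='{.status.podIP}'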

It might be a good idea to have a webserver (a reverse proxy like nginx) embedded directly in the container, because otherwise it gets pretty difficult to build such a solution: in the end you have to get the IP of the pod itself. Even if the Service has a static ClusterIP, it won't work with Tor directly because iptables routing sits in between; it only works with the pod's own IP. So putting HiddenServicePort 80 127.0.0.1:80 into torrc might be a good idea, in combination with a webserver that is reachable on 127.0.0.1:80 at all times. nginx itself can handle odd redirects like the ones shown above and can also retry a request to the backend multiple times if it does not respond immediately, which is definitely the case with K8s.
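A minimal torrc sketch of that suggestion (the HiddenServiceDir path is an assumption; adjust it to wherever your keys actually live):

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:80

With nginx listening on 127.0.0.1:80 in the same container (or the same pod), Tor never has to resolve a cluster-internal hostname at all.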

And just to rule out any broken configuration in my own environment, I tried this on DO, Vultr, Hetzner, GKE and Linode with multiple CNI plugins, and the behaviour was the same everywhere!

For K8s a new solution must be implemented.

cmehay commented 3 years ago

Well, I think I get the point, but the issue definitely seems to be on the Kubernetes side. If Kubernetes is not able to resolve the service to the right IP, that is something that should be fixed in Kubernetes first.

ghost commented 3 years ago

> Well, I think I get the point, but the issue definitely seems to be on the Kubernetes side. If Kubernetes is not able to resolve the service to the right IP, that is something that should be fixed in Kubernetes first.

Either this or the embedded reverse-proxy tor comes with is just crap!

ghost commented 3 years ago

Problem solved. It turned out that ClusterIP: None in my deployment led to this behaviour; besides that, I had a misconfiguration in my K8s labels. It took me 2-3 days to sort out. I can now also confirm that this project works flawlessly with K8s.
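For anyone hitting the same symptoms: clusterIP: None makes a Service headless, so DNS returns the individual pod IPs instead of one stable address. A minimal sketch of the non-headless variant, with placeholder names and labels:

apiVersion: v1
kind: Service
metadata:
  name: mywebserver
spec:
  # No "clusterIP: None" here, so the Service gets a single stable ClusterIP.
  selector:
    app: mywebserver   # must match the pod labels exactly
  ports:
  - port: 80
    targetPort: 80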