cirocosta / monero-operator

A Kubernetes-native way of deploying Monero nodes and even whole networks: express your intention and let Kubernetes run it for you.
https://www.getmonero.org/
Apache License 2.0

tor support #1

Closed cirocosta closed 3 years ago

cirocosta commented 3 years ago

Tor support

overview

Support for Tor is provided on two fronts:

  1. a secret reconciler, which fills annotated Secrets with hidden service credentials, and
  2. Tor-enabled Monero nodes, where a MoneroNodeSet gets a Tor sidecar wired to monerod.

Through the combination of both, one gets a full monero node, on any VPS or cloud provider, serving over both clearnet and Tor with no more than 5 lines of yaml:

kind: MoneroNodeSet
apiVersion: utxo.com.br/v1alpha1
metadata: {name: "my-nodes"}
spec:
  tor: {enabled: true}

yes, powerful.

secret reconciler

This reconciler watches Secrets and, based on their annotations, takes action on those that ask to be populated with Tor credentials.

All you need to do is create a Secret with the annotation utxo.com.br/tor: v3 for a v3 service (v2 is deprecated anyway, so why bother).

(ps.: if the secret is already populated, the reconciler WILL NOT try to populate it again)

For instance, we can create a Secret named tor

apiVersion: v1
kind: Secret
metadata:
  name: tor
  annotations:
    utxo.com.br/tor: v3

which, after reconciliation, will have its data filled with the content of the files you'd expect to find under HiddenServiceDir:

apiVersion: v1
kind: Secret
metadata:
  name: tor
  annotations:
    utxo.com.br/tor: v3
data:
  hs_ed25519_secret_key: ...
  hs_ed25519_public_key: ...
  hostname: blasblashskasjjha.onion

(you can tell whether things went well or not through events emitted by the reconciler)

With those filled, we can then make use of them through a volume mount in a Tor sidecar, which directs traffic to the main container's port over loopback - after all, they're in the same network namespace.

A full example of a Deployment serving as a hidden service:

---
#
# create an empty but annotated secret that will get populated with the hidden
# service credentials.
#
apiVersion: v1
kind: Secret
metadata:
  name: tor
  annotations:
    utxo.com.br/tor: "v3"

---
#
# fill a ConfigMap with the `torrc` to be loaded by the tor sidecar.
#
apiVersion: v1
kind: ConfigMap
metadata:
  name: tor
data:
  torrc: |-
    HiddenServiceDir /tor-creds
    HiddenServicePort 80 127.0.0.1:80
    HiddenServiceVersion 3

---
#
# the deployment of our application with the application container, as well as
# a sidecar that carries the tor proxy, exposing our app to the tor network.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  labels: {app: foo}
spec:
  selector:
    matchLabels: {app: foo}
  template:
    metadata:
      labels: {app: foo}
    spec:
      volumes:
        - name: tor-creds
          secret: {secretName: tor}
        - name: torrc
          configMap: {name: tor}

      containers:
        - image: utxobr/example
          name: my-main-container
          env:
            - name: ONION_ADDRESS
              valueFrom:
                secretKeyRef:
                  name: tor
                  key: hostname

        - image: utxobr/tor
          name: tor-sidecar
          volumeMounts:
            - name: tor-creds
              mountPath: /tor-creds
            - name: torrc
              mountPath: /torrc

ps.: notice that there's no need for a Service - that's because we don't need an external IP or any form of public port; this is a hidden service :)

an interesting side note here: not only are we able to expose our service on the Tor network, we also get outbound access to Tor via socks5 by making requests to the sidecar at 127.0.0.1:9050 (again, same network namespace!)

_ps.: note the use of the ONION_ADDRESS environment variable - that's in order to force redeployments to occur whenever there's a change to the secret - see https://ops.tips/notes/kuberntes-secrets/_

tor-enabled monero nodes

As MoneroNodeSets create plain core Kubernetes resources in order to drive the execution of monerod, we can do the same for enabling Tor support.

Just like with non-Tor nodes, we still want to be able to create nodes with nothing more than a request for monero nodes:

kind: MoneroNodeSet
apiVersion: utxo.com.br/v1alpha1
metadata: {name: "my-nodes"}
spec: 
  replicas: 1

As Tor support should be just as simple as clearnet, making it Tor-enabled takes a single line:

 kind: MoneroNodeSet
 apiVersion: utxo.com.br/v1alpha1
 metadata: {name: "my-nodes"}
 spec: 
   replicas: 1
+  tor: {enabled: true}

Under the hood, all that we do then is create one extra primitive: a utxo.com.br/tor-annotated Secret, which we mount into a Tor sidecar container. Using those credentials, the sidecar proxies traffic from the Tor network into monerod via loopback, and also serves as a socks5 proxy for outgoing connections (through loopback as well).


        StatefulSet

                ControllerRevision

                        Pod
                                container monerod
                                        -> mounts volume for data
                                        -> points args properly at sidecar

                                container torsidecar
                                        -> mounts volume for torrc configmap 
                                        -> mounts volume for hidden svc secrets
                                                -> proxies tor->monerod
                                                -> proxies monerod->tor
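The "points args properly at sidecar" step above could look roughly like this - the flags are monerod's real Tor-related flags, but the image names, the onion hostname, and the exact ports are illustrative, not what the operator necessarily generates:

```yaml
# sketch of the containers in the generated StatefulSet pod spec;
# the onion address would come from the reconciled Secret.
containers:
  - name: monerod
    image: utxobr/monerod
    args:
      # relay transactions over Tor through the sidecar's socks5 port
      - --tx-proxy=tor,127.0.0.1:9050,10
      # advertise the hidden service and accept inbound onion traffic,
      # forwarded by the sidecar to this loopback port
      - --anonymous-inbound=youronionaddress.onion:18083,127.0.0.1:18083
  - name: tor-sidecar
    image: utxobr/tor
```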
cirocosta commented 3 years ago

^ ultimately, all that we truly need to implement is the reconciler that fills up the secret with what `tor -f <>` gives us for a torrc that declares a HiddenServiceDir.

the rest (mounting the secret, generating a configmap, etc etc) is all stuff that we would expand in the MoneroNode reconciler based on a field under moneronode.spec.

cirocosta commented 3 years ago

TBD: making tor observable - it'll be interesting to run a hidden service for a while and learn some best practices around keeping that tor proxy running and observing its behavior, so we can tell whether everything is healthy or not.

cirocosta commented 3 years ago

something to figure out: multi-replica tor setup.

I'm currently divided between https://onionbalance.readthedocs.io/en/latest/ and a simpler single-entrypoint + l4 loadbalancing.

so far, l4 lb seems the most appropriate for a single-cluster setup, as you already have all the internal connectivity sorted out (although it might be worth mTLS'ing the load-balancer --> backends hop); high availability across multiple clusters could then be done with onionbalance.

cirocosta commented 3 years ago

on making tor observable: sounds like what we need is to gather some stats via its ControlPort.

see https://stem.torproject.org/api/control.html and spec: https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/control-spec.txt

cirocosta commented 3 years ago

also, might be worth giving https://github.com/oasisprotocol/curve25519-voi a try for pub & priv key generation, without bringing tor up just to fill the secret.

cirocosta commented 3 years ago

oh this is really interesting: https://github.com/lightningnetwork/lnd/blob/b1d9525d29f9f2995402d9a81a8d4fd1a5303c7a/tor/add_onion.go

(another example: https://github.com/cretz/bine/blob/master/control/cmd_onion.go)

sounds like you're able to "request" the creds right from the control port and get back priv key etc

cirocosta commented 3 years ago

https://github.com/rdkr/oniongen-go worked great for the generation 👍

cirocosta commented 3 years ago

something to figure out: multi-replica tor setup.

so, sidecar approach for replicas=1 works really well - pretty cool stuff.

definitely need to figure out the multiple replica scenario.

I'm at the moment tending towards:

  1. pods still carry a sidecar for monerod->tor traffic, but
  2. regardless of replica count, an extra deploy for tor->monerod ingress, with the backends load-balanced via some l4 LB OR making tor send the requests to an internal ClusterIP service (which by definition gives l4, k8s-native balancing regardless of the underlying infra)
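The ClusterIP option could be sketched like this - names and labels are illustrative, though 18080 is monerod's actual default p2p port:

```yaml
# l4-balanced backend for tor->monerod ingress: a plain ClusterIP
# Service already spreads connections across the node pods.
apiVersion: v1
kind: Service
metadata:
  name: my-nodes
spec:
  type: ClusterIP
  selector:
    app: my-nodes          # would match the pods created by the MoneroNodeSet
  ports:
    - name: p2p
      port: 18080          # monerod's default p2p port
      targetPort: 18080
```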
cirocosta commented 3 years ago
_(screenshot: Screen Shot 2021-05-09 at 3 49 50 PM)_
cirocosta commented 3 years ago

updated:

_(screenshot: Screen Shot 2021-05-09 at 7 30 35 PM)_
cirocosta commented 3 years ago

a proof of concept proved to be quite effective - the code is not great, but it was nice to demonstrate that this can work pretty well :) with the CCS accepted, work can begin on moving from prototype -> decent implementation.