Closed: cirocosta closed this issue 3 years ago
^ ultimately, all that we need to truly implement is the reconciler that fills up the secret based on what `tor -f <>` on a torrc that states a `HiddenServiceDir` gives us.

the rest (mounting the secret, generating a configmap, etc etc) is all stuff that we would expand in the `MoneroNode` reconciler based on a field under `moneronode.spec`.
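for reference, the torrc this boils down to is tiny - pointing tor at a `HiddenServiceDir` makes it generate the hostname and keypair files there on startup (the dir and ports below are just illustrative):

```
# torrc sketch - paths and ports here are illustrative, not the
# operator's actual values
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 18089 127.0.0.1:18089
```

after a run of `tor -f torrc`, that directory holds `hostname`, `hs_ed25519_public_key`, and `hs_ed25519_secret_key` - exactly the contents the reconciler would copy into the secret.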
TBD: making tor observable - it'll be interesting to have a hidden service running for a while so we can learn some best practices around keeping that tor proxy running and observing its behavior, letting us tell whether everything is healthy or not.
something to figure out: multi-replica tor setup.
I'm currently divided between https://onionbalance.readthedocs.io/en/latest/ and a simpler single-entrypoint + l4 loadbalancing.
so far, l4 lb seems the most appropriate for a single-cluster setup, as you already have all the internal connectivity sorted out (though it might be worth mTLS'ing load-balancer --> backends); high availability across multiple clusters could then be achieved with onionbalance.
on making tor observable: sounds like what we need is to gather some stats via its ControlPort.
see https://stem.torproject.org/api/control.html and spec: https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/control-spec.txt
also, might be worth giving a try at https://github.com/oasisprotocol/curve25519-voi for pub & priv key generation without bringing tor up for filling the secret.
oh this is really interesting: https://github.com/lightningnetwork/lnd/blob/b1d9525d29f9f2995402d9a81a8d4fd1a5303c7a/tor/add_onion.go
(another example: https://github.com/cretz/bine/blob/master/control/cmd_onion.go)
sounds like you're able to "request" the creds right from the control port and get back priv key etc
https://github.com/rdkr/oniongen-go worked great for the generation 👍
something to figure out: multi-replica tor setup.
so, the sidecar approach for replicas=1 works really well - pretty cool stuff.

definitely need to figure out the multiple replica scenario. I'm at the moment tending towards: keeping the sidecar for monerod->tor traffic regardless of replica number, plus an extra deploy for tor->monerod ingress, with the backends either load-balanced via some l4 lb OR having tor send the requests to an internal ClusterIP service (which by definition gives us l4 k8s-native balancing regardless of the underlying infra).

updated: a proof of concept turned out to be quite effective - the code is not great, but it was nice to demonstrate that it can work pretty well :) with the CCS accepted, work can start on moving from prototype -> decent implementation.
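the ClusterIP variant could look roughly like this (names and ports assumed, just a sketch):

```yaml
# hedged sketch: the tor ingress deploy points its HiddenServicePort
# target at this plain ClusterIP service, getting k8s-native l4
# balancing across monerod replicas for free
apiVersion: v1
kind: Service
metadata:
  name: monerod
spec:
  selector:
    app: monerod
  ports:
    - port: 18089
      targetPort: 18089
```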
Tor support
overview
Support for Tor is provided on two fronts:

- `utxo.com.br/tor`-labelled secrets
- `monerod` instances with a Tor sidecar that acts as ingress and egress for Tor traffic, as well as applying the proper args for `monerod`.

Through the combination of both, one gets the ability of having a full monero node, on any VPS or cloud provider, serving over both clearnet and tor with no more than 5 lines of yaml:
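(the original example isn't reproduced here; a sketch of what it would look like, with apiVersion and field names assumed from the discussion below:)

```yaml
# hedged sketch - apiVersion and field names are assumptions
apiVersion: utxo.com.br/v1alpha1
kind: MoneroNodeSet
metadata:
  name: full-node
spec:
  replicas: 1
  tor: true
```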
yes, powerful.
secret reconciler
This reconciler, based on labels, takes action on those Secrets that should be populated with Tor credentials.
All you need to do is create a Secret with the annotation `utxo.com.br/tor: v3` for a v3 service (v2 will be deprecated anyway, so why bother).

(ps.: if the secret is already populated, the reconciler WILL NOT try to populate it again)
For instance, we can create a Secret named `tor` which, after reconciliation, will see its `data` filled with the content of the files you'd expect to see under `HiddenServiceDir`:

(you can see whether things went well or not through events emitted by the reconciler)
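a sketch of what the reconciled Secret would look like (keys mirroring the files tor writes under `HiddenServiceDir`; values elided):

```yaml
# hedged sketch: the user creates the annotated Secret with no data;
# after reconciliation, data carries the HiddenServiceDir files
apiVersion: v1
kind: Secret
metadata:
  name: tor
  annotations:
    utxo.com.br/tor: v3
data:
  hostname: "<base64>"
  hs_ed25519_public_key: "<base64>"
  hs_ed25519_secret_key: "<base64>"
```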
With those filled, we're then able to make use of them in the form of a volume mount in a Tor sidecar which then directs traffic to the main container's port through loopback - after all, they're in the same network namespace.
A full example of a highly-available hidden service:
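(the original manifest isn't reproduced here; a sketch of the shape it would take - image names, ports, and the secret name are assumptions:)

```yaml
# hedged sketch: app container + tor sidecar sharing one network
# namespace, with the reconciled secret mounted into the sidecar
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hidden-service
spec:
  replicas: 2
  selector:
    matchLabels: {app: hidden-service}
  template:
    metadata:
      labels: {app: hidden-service}
    spec:
      containers:
        - name: app
          image: nginx              # assumption: any service on loopback
        - name: tor
          image: some/tor:latest    # assumption
          env:
            - name: ONION_ADDRESS   # forces redeploys on secret change
              valueFrom:
                secretKeyRef: {name: tor, key: hostname}
          volumeMounts:
            - name: tor-creds
              mountPath: /var/lib/tor/hidden_service
      volumes:
        - name: tor-creds
          secret: {secretName: tor}
```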
ps.: notice that there's no need for a `Service` - that's because we don't need an external ip or any form of public port; this is a hidden service :)

an interesting side note here is that we not only are able to expose our service in the Tor network, but we also have access to it via `socks5` by making requests to the sidecar under 127.0.0.1:9050 (again, same network namespace!)

_ps.: note the use of the `ONION_ADDRESS` environment variable - that's in order to force redeployments to occur whenever there's a change to the secret - see https://ops.tips/notes/kuberntes-secrets/_

tor-enabled monero nodes
As `MoneroNodeSet`s create plain core Kubernetes resources in order to drive the execution of `monerod`, we can do the same for enabling Tor support.

Just like with non-Tor nodes, we want to still be able to create nodes with nothing more than a request for monero nodes:
As Tor support should be just as simple as clearnet, making it Tor-enabled takes a single line:
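(sketch of that single-line change - the field name is an assumption, not confirmed against the actual CRD:)

```yaml
# hedged sketch: same node request as before, Tor-enabled
apiVersion: utxo.com.br/v1alpha1
kind: MoneroNodeSet
metadata:
  name: full-node
spec:
  replicas: 1
  tor: true   # <- the single extra line
```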
Under the hood, all that we do then is create one extra primitive: a `utxo.com.br/tor`-labelled `Secret`, which we then mount into a Tor sidecar container that, using those credentials, is able to proxy traffic from the Tor network into `monerod` via loopback, as well as serve as a socks5 proxy for outgoing connections (through loopback as well).