opengsn / gsn

GSN v3.0.0-beta.10 - Ethereum Gas Station Network
https://opengsn.org/

Run a relayer (v1/v2) #356

Closed gjeanmart closed 4 years ago

gjeanmart commented 4 years ago

Hi,

I'm looking into running my own Mainnet GSN relayer, but right now there is a bit of confusion in the resources available online between GSN v1 and v2, OpenZeppelin and OpenGSN...

Questions:

Thanks :)

PS: I know you are currently in some kind of transition, so I understand the slight confusion (no problem); I'm just trying to get my head around it.

drortirosh commented 4 years ago
gjeanmart commented 4 years ago

Thanks for your answer.

I've done some research over the weekend and would still like to deploy a GSN v1 relayer, then potentially migrate to GSN v2 when it is ready.

drortirosh commented 4 years ago

Thank you very much for the effort. GSN is a community effort, and we used David Mihal's docker-compose idea as it was better than what we had. I'd love to see how your k8s solution works out. At first glance, k8s looked much more complex (and supports more complex redundancy scenarios, which we can't use), but if there's a simple, robust configuration we'd love to use it.

For this reason a relayer can't sit behind a load balancer: the client has to know the exact relay it uses, because it signs a request for that specific relayer. We did think about using AWS Lambda or Google Functions for a relay, but again, the storage of the private key was problematic. "AWS Certificate Manager" and the like might be usable, but we didn't have time to research it.
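To illustrate why a load balancer can't fan requests out to interchangeable relays, here is a minimal Python sketch. It is not the GSN protocol: the HMAC stands in for the client's ECDSA signature, and all names and addresses are hypothetical. The point is only that the target relay's address is part of the signed payload, so a different relay cannot serve the request.

```python
import hashlib
import hmac
import json

# Placeholder for the client's signing key (conceptually, its Ethereum
# private key; here just bytes fed to an HMAC stand-in).
CLIENT_KEY = b"client-private-key"

def sign_relay_request(relay_address: str, calldata: str) -> dict:
    # The relay's address is baked into the signed payload.
    payload = json.dumps({"relay": relay_address, "data": calldata}, sort_keys=True)
    sig = hmac.new(CLIENT_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"relay": relay_address, "data": calldata, "sig": sig}

def relay_accepts(own_address: str, request: dict) -> bool:
    # A relay must refuse a request signed for a different relay:
    # forwarding it on-chain would fail the signature check anyway.
    return request["relay"] == own_address

req = sign_relay_request("0xRelayA", "0xdeadbeef")
print(relay_accepts("0xRelayA", req))  # True
print(relay_accepts("0xRelayB", req))  # False: a load balancer can't substitute relays
```

So even if a load balancer sat in front of several relays, only the one whose address the client signed for could actually process the request.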

gjeanmart commented 4 years ago

Thanks !

I understand the redundancy trust issue here, but I don't think you can guarantee this property: it is easy to share the key between two machines set behind a load balancer. The client might not like it, but he would never know...


About k8s, I can see two use cases without redundancy: (1) I am an advocate of digital decentralisation, and blockchain isn't the only solution; self-hosting can help decentralisation too. I am a member of a small community of passionate IT people building homelabs and self-hosting platforms. One solution that may sound overkill but ends up very cool, powerful and easy to maintain is Kubernetes at home (here is my guide about k8s and Raspberry Pi). The Rinkeby relayer 0x8a4dfc7f236b03e33ad4bc4c2725f9272490d487 is currently running on a Raspberry Pi Kubernetes cluster in a closet :)

(2) Many companies and projects already orchestrate their entire infrastructure with Kubernetes. Adding a single pod dedicated to a GSN relayer is easy and in line with the existing infra.

Using Helm, it only takes 3 min to deploy a GSN relayer on a k8s platform.


I found a "risky" workaround to make the container stateless for now: I rebuild the image dmihal/gsn-relay-xdai with an additional line that starts the server for 2 seconds so it generates a key. As long as I keep this image to myself, it does the job :)

```dockerfile
FROM dmihal/gsn-relay-xdai
# Start the server briefly so it generates its key inside the image
RUN timeout 2s /app/bin/RelayHttpServer || :
```

drortirosh commented 4 years ago

I actually like the Relay-on-Pi idea very much. The GSN2 relayer was rewritten in JavaScript, which makes it easier to port to other processors.

Instead of putting the key into the image, I prefer leaving it inside the container, by removing the "/app/data" mapping to the host. In this case, you need to remember not to run docker-compose down (or the k8s equivalent).

About sharing the private key: this is a very dangerous thing to do, and not because of its secrecy. If 2 machines happen to use the same PK, then one tx will get lost, but even worse: the relay has just committed an attempted "fraud" and can be penalized for it. Anyone seeing these 2 transactions with the same nonce can put them on-chain (penalizeRepeatedNonce) and slash the relay's stake. So in order to use the same key on 2 machines, you would need some inter-machine mutex to AVOID ever using the same key on both at once...
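The fraud condition described above can be sketched in a few lines: two distinct signed transactions from the same key with the same nonce are self-incriminating evidence. This Python sketch is purely illustrative (the transaction dicts and function name are hypothetical, not the actual GSN contract interface for penalizeRepeatedNonce):

```python
def find_repeated_nonce(txs):
    """Return a pair of conflicting payloads if the same (signer, nonce)
    was used for two different transactions, else None."""
    seen = {}
    for tx in txs:
        key = (tx["signer"], tx["nonce"])
        if key in seen and seen[key] != tx["payload"]:
            # Both transactions together are the on-chain evidence
            # that lets anyone slash the relay's stake.
            return seen[key], tx["payload"]
        seen[key] = tx["payload"]
    return None

# Two machines sharing one key, each broadcasting with nonce 7:
txs = [
    {"signer": "0xRelay", "nonce": 7, "payload": "tx-from-machine-A"},
    {"signer": "0xRelay", "nonce": 7, "payload": "tx-from-machine-B"},
]
print(find_repeated_nonce(txs))  # ('tx-from-machine-A', 'tx-from-machine-B')
```

This is why naive key sharing behind a load balancer is worse than downtime: the conflicting transactions are publicly observable, and anyone can submit them to collect the penalty.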

In GSN, redundancy and scalability are achieved differently: the client pings multiple relays and attempts to connect to the first one that answers. This way, it knows the relay was available a second ago. If that fails, it continues with the next pinged relay. Reverse proxies and load balancers are of very little help here.
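The client-side selection described above can be sketched as follows. This is a simplified, sequential Python stand-in (the real GSN2 client is JavaScript and races concurrent HTTP pings); the `ping` callable and URLs are hypothetical placeholders for the relay's ping endpoint:

```python
def pick_relay(relays, ping):
    """Return the first relay that answers a ping; skip dead ones."""
    for url in relays:
        try:
            if ping(url):
                return url
        except Exception:
            continue  # unreachable relay: fall through to the next one
    raise RuntimeError("no relay available")

relays = ["https://relay-1.example", "https://relay-2.example"]
alive = {"https://relay-2.example"}  # pretend relay-1 is down
print(pick_relay(relays, ping=lambda url: url in alive))
# https://relay-2.example
```

Failover is thus a client concern, not an infrastructure one, which is why a reverse proxy in front of the relays adds little.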

gjeanmart commented 4 years ago

Thanks. That makes a lot of sense.

Understood: the risk of nonce clashing when sharing the same key between several relayers, and the punishment mechanism that deters it.

Using containers instead of images as a "reference" can be quite a risky game: a container is a snapshot of an image, and it is by definition supposed to have a short life, replaceable in case of failure (and duplicable if scalability is wanted). I understand it would work with docker-compose, as Docker "caches" containers so they can be re-used and start fast, but $ docker-compose down or $ docker rm <cid> can make you lose your relay funds forever.

Kubernetes doesn't work like this, especially as it is designed to run on a cluster of machines: a container (pod in k8s jargon) is an ephemeral instance of an image; one day it can run on worker-1, the next on worker-2. On the other hand, the custom image works as long as we set the -Workdir outside the volume /app/data, but with the same problem: we have to be sure not to lose the image (a private Docker registry would definitely help).

Anyway. Interesting discussion, happy to help to make GSN more cloud friendly and production ready :)