guildxyz / guild-zk


Research notes #28

Closed PopcornPaws closed 2 years ago

PopcornPaws commented 2 years ago

Description

This thread is set up to track findings and insights about existing solutions for anonymous token gating and airdrops. This is an attempt to pinpoint their weaknesses and find a solution that bridges all gaps.

PopcornPaws commented 2 years ago

Requirements

Research goal:

Let users into a discord/telegram etc server if they meet certain criteria based on their token holdings without us linking their address/public key to their discord/telegram id.

What tools do we have:

We need to set up a verification system using these tools that satisfies our research goal and maintains the highest possible user experience standard, i.e.:

Needless to say, the implementation has to be complete and sound in the mathematical sense: an honest/valid proof must always be accepted, and all invalid proofs must be rejected. We should also take care of the double-spend problem, i.e. users should get access only once.

PopcornPaws commented 2 years ago

Preliminaries

Here is a collection of terms that will be helpful in understanding what's coming next.

Ring signatures

Ring signatures are an anonymity-preserving form of signatures: the signer hides their identity within a ring of addresses/public keys such that there's no way (with today's technology) to figure out who generated the signature. This ring can be referred to as the anonymity set, in which any member can generate a valid signature without revealing themselves.

Using ring signatures is an obvious first choice when we think about solving our research goal, however we quickly find ourselves in a pickle when we look at all our requirements. The problem is that ring signatures, like any other signature scheme, use the private key of the signer directly. Unfortunately, since we want to use Metamask for our signature scheme, we restrict ourselves to ECDSA signatures generated in Metamask's secure execution environment, which essentially nobody has access to.

However, ring signatures can be linkable, which means that even though we don't know who generated the signature, we can link it to existing signatures generated on the same set. Thus, we can check whether the unknown signer has already generated a signature and prevent them from double spending, i.e. passing verification multiple times. Note that linkability is only possible because of direct access to the user's private key.
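To make linkability concrete, here is a minimal LSAG-style linkable ring signature sketch over a toy Schnorr group (parameters found at startup, naive hash-to-group via an exponent, no side-channel hardening; purely illustrative, not a secure implementation — real schemes use elliptic-curve groups). The key image `h^x` is identical for every signature by the same key on the same ring, which is exactly what enables double-spend detection:

```python
import hashlib
import secrets

def is_prime(n):
    """Deterministic Miller-Rabin, valid for n < 3.3 * 10**24 (covers 64-bit)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def find_safe_prime(start):
    """Smallest safe prime P = 2Q + 1 with Q >= start."""
    q = start | 1
    while not (is_prime(q) and is_prime(2 * q + 1)):
        q += 2
    return 2 * q + 1, q

P, Q = find_safe_prime(1 << 63)
G = 4  # quadratic residue mod P, hence a generator of the prime-order-Q subgroup

def h_int(*parts):
    """Hash arbitrary values into Z_Q (Fiat-Shamir challenge)."""
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def sign(msg, ring, x, s):
    """LSAG-style linkable ring signature by ring member s holding secret x."""
    n = len(ring)
    h = pow(G, h_int(*ring) % (Q - 1) + 1, P)  # toy hash-to-group, ring-dependent
    key_image = pow(h, x, P)  # identical for all signatures by x on this ring
    c = [0] * n
    r = [secrets.randbelow(Q) for _ in range(n)]
    u = secrets.randbelow(Q)
    c[(s + 1) % n] = h_int(msg, pow(G, u, P), pow(h, u, P))
    i = (s + 1) % n
    while i != s:  # walk the ring, computing each challenge from the previous one
        c[(i + 1) % n] = h_int(
            msg,
            pow(G, r[i], P) * pow(ring[i], c[i], P) % P,
            pow(h, r[i], P) * pow(key_image, c[i], P) % P,
        )
        i = (i + 1) % n
    r[s] = (u - x * c[s]) % Q  # closes the ring at the signer's index
    return c[0], r, key_image

def verify(msg, ring, sig):
    c0, r, key_image = sig
    h = pow(G, h_int(*ring) % (Q - 1) + 1, P)
    c = c0
    for i in range(len(ring)):
        c = h_int(
            msg,
            pow(G, r[i], P) * pow(ring[i], c, P) % P,
            pow(h, r[i], P) * pow(key_image, c, P) % P,
        )
    return c == c0
```

A verifier that stores seen key images can reject a second signature from the same (unknown) signer without ever learning who they are.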

Some existing implementations and papers on ring signatures:

Zero-knowledge ECDSA

This recent work from Cloudflare presents a method that proves in zero knowledge that we know a valid ECDSA signature on a given message. This method has many desirable properties but it comes with a serious downside.

Proving knowledge of a valid ECDSA signature over a given message does not require direct access to the user's private key, only a signature. This signature can be generated via Metamask or other popular wallets. However, there's no way to link the proof to the user; thus, in theory, one can generate multiple valid ECDSA signatures and gain access to private channels with different accounts provided along with the proofs. We can link a signature to a previously submitted one, but only if all signatures were generated on the same input (which we can enforce) with the deterministic RFC 6979 ECDSA algorithm (which we cannot enforce). Thus, if someone has access to their private keys, they can generate different (but still valid) ECDSA signatures on the same message, preventing the verifier from linking them.
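This failure mode is easy to reproduce. Below is a minimal pure-Python secp256k1 ECDSA sketch (illustrative only: not constant-time, no input validation) where the nonce `k` is passed in explicitly rather than derived per RFC 6979. Any valid nonce produces a different yet equally valid signature on the same message by the same key, so a verifier cannot link the two:

```python
import hashlib

# secp256k1 domain parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P1, P2):
    """Affine point addition on y^2 = x^3 + 7; None is the point at infinity."""
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        m = 3 * x1 * x1 * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return x3, (m * (x1 - x3) - y1) % p

def scalar_mult(k, P1):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P1)
        P1 = point_add(P1, P1)
        k >>= 1
    return R

def msg_hash(msg):
    return int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big") % n

def sign(msg, d, k):
    """ECDSA with an explicit nonce k: every valid k yields a different valid signature."""
    z = msg_hash(msg)
    r = scalar_mult(k, G)[0] % n
    s = pow(k, -1, n) * (z + r * d) % n
    return r, s

def verify(msg, pub, sig):
    r, s = sig
    z = msg_hash(msg)
    w = pow(s, -1, n)
    X = point_add(scalar_mult(z * w % n, G), scalar_mult(r * w % n, pub))
    return X is not None and X[0] % n == r

d = 0x1234567890ABCDEF  # toy private key for illustration
pub = scalar_mult(d, G)
sig1 = sign("claim airdrop", d, k=111111)
sig2 = sign("claim airdrop", d, k=222222)
```

Both `sig1` and `sig2` verify against the same public key and message, yet share no common component, which is precisely why signature equality cannot serve as a nullifier.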

Cloudflare's work has the advantage over SNARK/PLONK-based solutions that it does not require a trusted setup. Furthermore, with some optimizations, the proving time can be reduced to ~2-2.5 seconds and the verification time to under 0.5 seconds.

PopcornPaws commented 2 years ago

Existing solutions

Below is a list of a few projects that are aiming to solve the above problem or something very similar.

Stealthdrop

Stealthdrop is a SNARK-based solution for managing privacy-preserving airdrops. It essentially does the same ECDSA verification procedure stated above, but it uses SNARK circuits for proof generation instead of commitment schemes. Therefore it requires a trusted setup, and the ECDSA circuit takes a beefy computer to evaluate (they actually outsourced the proof generation to an external server with more than 100GB of RAM). Not all types of proofs take this long to evaluate, but since they also assumed that the user doesn't have access to their private key, they had to choose the ECDSA proof, which has the most constraints.

Next to the ECDSA verification (which proves that the user has the private key associated with the wallet address) they also generate a membership proof based on a Merkle tree of eligible addresses. This is a simple proof compared to the signature proof, and it runs fairly quickly on a simple laptop as well. The main issue, however, is the linkability of proofs.
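For reference, the membership half of the scheme is conceptually just a Merkle inclusion proof. A minimal sketch with plain SHA-256 (circuit implementations use SNARK-friendly hashes such as Poseidon instead, but the tree logic is the same):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels, leaves first; odd levels duplicate their last node."""
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Sibling path from leaf `index` up to (but excluding) the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling hash, is-right flag)
        index //= 2
    return path

def verify_proof(root, leaf, path):
    node = H(leaf)
    for sibling, is_right in path:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == root
```

A proof is logarithmic in the number of eligible addresses, which is why this part of Stealthdrop is cheap compared to the ECDSA circuit.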

As mentioned above, StealthDrop assumed that all ECDSA proofs would be generated via a deterministic algorithm, i.e. signing the same input will generate the same signature. However, as pointed out in this twitter thread, if someone has access to their private key, they can generate valid but different ECDSA signatures on the same input and double claim the airdrop with a different proof. The only solution seems to break our UX requirements, i.e. we would need to generate proofs directly on the user's private key, which is not possible with Metamask currently (I'll touch on Metamask Flask later).

a16z's PLONK-based solution

Crypto airdrop is an awesome strategy proposal for handling privacy-preserving airdrops. It builds on PLONK, and its proofs can be verified using an on-chain smart contract. Its mechanics are quite similar to the TornadoCash solution, which they reference heavily.

Namely, a user who is eligible for an airdrop sends a commitment $C(pubkey + secret)$ (Pedersen-hash) to an admin who verifies that the sender is indeed eligible. They probably check their balance based on their wallet and/or acquire the commitment via a discord/telegram/etc server and assume that they are eligible if they are in the server. Nevertheless, an admin is necessary to create and manage a Merkle tree of the commitments from the eligible addresses. Once the Merkle tree is constructed, it can be deployed with a smart contract, and users can generate a PLONK-based membership proof that their commitment is included in one of the leaves of the Merkle tree.
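The hiding/binding properties that $C(pubkey + secret)$ relies on are those of a Pedersen commitment. A toy classic Pedersen commitment sketch (tiny, insecure group parameters, and note the project itself uses a Pedersen hash over an elliptic curve; this only illustrates the principle):

```python
import secrets

# Toy Schnorr group: safe prime P = 2Q + 1; G and H generate the order-Q subgroup.
# In a real scheme the discrete log of H base G must be provably unknown.
P, Q = 467, 233
G, H = 4, 9

def commit(m, r=None):
    """C = G^m * H^r mod P: r hides m (hiding), and C can't be opened to m' != m (binding)."""
    if r is None:
        r = secrets.randbelow(Q)
    return pow(G, m, P) * pow(H, r, P) % P, r

def open_commitment(C, m, r):
    return C == pow(G, m, P) * pow(H, r, P) % P

# the "pubkey + secret" commitment from the airdrop flow, with toy stand-in values
pubkey_x = 42   # stand-in for the public key's x coordinate
secret = 77     # user-chosen secret
C, r = commit((pubkey_x + secret) % Q)
```

The admin can put `C` in the Merkle tree without learning `pubkey_x`, while the user can later prove (in zero knowledge, in the real system) that they know a valid opening.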

The project is a great showcase of how a PLONK proof can be verified on-chain; however, I fail to see how exactly it is privacy-preserving. I'm probably missing something, but they start with the premise that users are probably fine with sending their addresses over public channels, but not their public keys. But basically anybody can scrape a signed transaction of a given address from the blockchain and recover the respective public key. Users need to interact with the deployed contract when they withdraw, thus their address is public anyway.

Furthermore, what stops me from sending a different (eligible) address to the admin along with a commitment to my public key and secret, thus getting multiple commitments into the Merkle tree? I mean, how can the admin verify that the address which sent the commitment to the public key is indeed the address derived from that public key? And if there's no address sent along with the commitment, how is it ensured that I'm eligible if the admin receives only the commitment?

These are some open questions that probably require some more digging in the source code. But the project is definitely a cool one, not to mention the fact that Kobi Gurkan is one of the contributors (I think he is a core dev who created Semaphore, which is a similar Groth16-based SNARK system).

Cabal.xyz

Cabal.xyz works with SNARKs and a GK-based ring signature. They overcame the issue of not having direct access to the users' private keys by integrating their solution into a Metamask snap. These snaps allow developers to run their custom algorithms within Metamask. For example, snaps can directly access the private key of the user and generate a ring signature within Metamask. It is important to note that snaps only work within Metamask Flask, an experimental version of the Metamask extension, and it's only supported in Chrome. Snaps provide great flexibility, but they are experimental and not widely adopted yet. Nevertheless, we should keep an eye on how it progresses, especially since it seems that snaps can run Wasm directly, which means we could implement a GK signature in Rust and run the compiled Wasm in a snap.

PopcornPaws commented 2 years ago

Proposed solution

Since we have quite restrictive requirements, it's not easy to find a solution that satisfies all of them using currently available technology. Nevertheless, I'll try to summarize how we could solve the problem. Due to having no access to the private key, we are going to need an ECDSA verification step somewhere that is not linkable. Thus, some trust will be required from the users.

Flow

1. User generates a commitment to their public key's x coordinate in the browser.
   - 1/a. User generates a raw ECDSA signature on some message (here the message doesn't matter).
   - 1/b. User generates a zero-knowledge ECDSA signature proof on a known message.
   - 1/c. User sends their address, the raw ECDSA signature, the zk ECDSA proof and the guild-id (which they want to join) to a trusted admin.
   - 1/d. User saves the commitment and the respective Pedersen parameters locally (they will need them later).
2. Admin receives the address, the raw ECDSA signature, the public key commitment and the zk ECDSA proof.
   - 2/a. Admin checks that the address is eligible to enter guild-id based on the requirement set (could use balancy).
   - 2/b. Admin recovers the public key from the raw ECDSA signature, hashes it and checks that the address belongs to the public key.
   - 2/c. Admin checks that the zk ECDSA proof is valid and that the committed public key is indeed linked to the signer.
   - 2/d. Admin stores the commitment somewhere without disclosing which address it belongs to.
   - 2/e. Admin stores the address somewhere unlinkable to the commitments, so that it can ensure the same address never sends another commitment.
3. Some time later, the user generates an ECDSA proof on a known message using the same commitment they previously sent along with their address.
   - 3/a. They send this proof and their discord id to the guild-gate verifier.
4. The guild-gate verifier checks the ECDSA proof and queries the trusted, admin-managed pool of commitments to see that the ECDSA proof used one of the commitments from there.
   - 4/a. If the proof is valid, it grants access to the discord server and removes the commitment from the commitment pool.
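The flow above can be sketched as a toy simulation of the parties involved. All names here are hypothetical; a plain hash commitment stands in for the Pedersen commitment, and the raw/zk ECDSA checks are omitted (the toy verifier receives the commitment opening directly, whereas the real flow verifies a zero-knowledge proof so the public key is never revealed to it):

```python
import hashlib

def commit(pubkey: str, salt: str) -> str:
    # toy hash commitment; the real flow uses a Pedersen commitment + zk ECDSA proof
    return hashlib.sha256(f"{pubkey}|{salt}".encode()).hexdigest()

class Admin:
    """Checks eligibility once per address and keeps an unlinkable commitment pool."""
    def __init__(self, eligible):
        self.eligible = set(eligible)
        self.seen_addresses = set()   # stored separately, unlinkable to commitments
        self.commitment_pool = set()

    def register(self, address, commitment):
        # real flow: also verify the raw ECDSA signature and the zk ECDSA proof here
        if address not in self.eligible or address in self.seen_addresses:
            return False
        self.seen_addresses.add(address)
        self.commitment_pool.add(commitment)
        return True

class GuildGate:
    """Grants access if the proof opens a pooled commitment, then burns it."""
    def __init__(self, admin):
        self.admin = admin

    def join(self, discord_id, pubkey, salt):
        # real flow: verify a zk ECDSA proof instead of receiving (pubkey, salt);
        # discord_id would be bound to the granted role here
        c = commit(pubkey, salt)
        if c not in self.admin.commitment_pool:
            return False
        self.admin.commitment_pool.remove(c)  # prevents double spend
        return True
```

Removing the commitment on a successful join is what enforces the "access only once" requirement; a second proof against the same commitment finds the pool entry already spent.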
