JoinMarket-Org / joinmarket

CoinJoin implementation with incentive structure to convince people to take part

Make blacklist check less time sensitive / allow serial !fill messages #723

Open eduard6 opened 7 years ago

eduard6 commented 7 years ago

It seems that the current Maker blacklist implementation works like this:

  1. ...
  2. T: !fill ... <commitment>
  3. Maker checks if <commitment> is on blacklist
  4. M: !pubkey ...
  5. T: !auth ...
  6. M: !ioauth ...
  7. Maker broadcasts Taker <commitment> (using self.maker.transfer_commitment() / !hp2)
  8. ...

This presents a number of problems:

A better system would:

Suggestions for a better system:

AdamISZ commented 7 years ago

Sorry for the long delay before responding. I was hoping we'd get further in #693 but then got sidetracked working on other things.

> Creates a race condition that allows a Taker to get UTXOs from an unlimited number of Makers using a single commitment, defeating the purpose of having commitments. Taker just sends !fill requests in parallel to as many makers as they want using the same commitment.

This is by design. It doesn't defeat the purpose of having commitments. It's what I saw as the most realistic rate limiting we could do in a reasonable way. Fundamentally, once a commitment is sent to any counterparty, it's potentially broadcast (because bad actors could be running multiple bots in any case). Also, potential improvements for the Taker experience based on selecting a large group and then filtering down would require this aspect.
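The "parallel reuse is possible" point can be shown in a few lines. This is a toy illustration with hypothetical names: each Maker consults only its own blacklist at !fill time, and none has seen the commitment yet, so a single commitment passes the fill-time check at every Maker simultaneously.

```python
# Toy illustration (hypothetical names) of parallel !fill reuse: before any
# !hp2 broadcast, every Maker's local blacklist is still empty, so all of
# them accept the same commitment.

class Maker:
    def __init__(self):
        self.blacklist = set()

    def accepts_fill(self, commitment):
        return commitment not in self.blacklist


makers = [Maker() for _ in range(10)]
commitment = "single-commitment"

# Taker fires !fill to all makers "in parallel":
accepted = sum(m.accepts_fill(commitment) for m in makers)
print(accepted)   # all 10 accept the same commitment
```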

> Could cause slower Makers to reject a valid commitment. This is especially relevant for Makers using Tor or if a Tor-based data channel is used in the future.

I was very careful about this point, and was under the impression it was nearly impossible. See the second paragraph in this comment; although I'm not stating this as a correction, your sequencing above is correct, i.e.:

Makers will only reject a commitment either immediately on !fill (because they've already seen it), or add it and broadcast it after !ioauth. Since the Taker sends all !fill messages first, receives the !pubkey replies, sends !auth, and only then do the !ioauth messages occur, the failure mode would require e.g. sending out two fills, having the first received by Maker A and processed all the way through !ioauth and broadcast, and only then having Maker B receive that broadcast, before it had even received the very first !fill. So not only would Maker B need a very slow connection, it would have to be selectively ultra-slow, such that the !hp2 from Maker A reaches Maker B before the !fill from the Taker. Since messages arrive in a queue over the network I can't see it realistically happening, although I don't deny it is logically possible, depending on the network layer.
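The ordering argument can be made concrete with a toy timeline. The step numbers here are illustrative, not measured: the point is simply that a maker's !hp2 broadcast can only follow the full fill -> pubkey -> auth -> ioauth round trip, so for Maker B to blacklist first, its inbound queue would have to delay the !fill past four later protocol steps.

```python
# Toy timeline (hypothetical step counts) of the ordering argument above.

events = [
    (0, "taker", "!fill -> A"),
    (0, "taker", "!fill -> B"),   # sent at the same time as the fill to A
    (1, "A", "!pubkey -> taker"),
    (2, "taker", "!auth -> A"),
    (3, "A", "!ioauth -> taker"),
    (4, "A", "!hp2 broadcast"),   # earliest possible blacklist broadcast
]

fill_to_B = min(t for t, _, msg in events if msg == "!fill -> B")
hp2_time = min(t for t, _, msg in events if msg == "!hp2 broadcast")
print(hp2_time - fill_to_B)   # the broadcast trails the fill by 4 steps
```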

> Prevents dynamic selection of orders by a Taker. This means a Taker is bound to the orders they originally selected even if some of Makers don't respond.

I hadn't considered this possible approach: firing some fills, and then firing more if you're not happy with what comes back. But the idea here is we're trying to rate limit, so what's wrong with just firing a very large number (say 15 if you only want 5), and then choosing a subset? This is exactly what @adlai had proposed as his 'taker sophistication' approach, although it hasn't been done yet. One can see it as the natural extension over the current 'allow to complete with subset', but just adding more intelligence. I think that would be simpler.
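The "fire 15, keep 5" idea can be sketched as below. The order fields and function names are hypothetical, not the real joinmarket order format; the point is only the shape of the strategy: over-fill, then filter the responders down to the subset you actually want.

```python
# Hedged sketch of the 'taker sophistication' idea: fill more Makers than
# needed, then keep the best subset of whoever responds. The "fee" field
# and names are illustrative only.

def choose_subset(responses, needed):
    """Pick the `needed` cheapest responders from whoever answered."""
    responders = [r for r in responses if r["responded"]]
    responders.sort(key=lambda r: r["fee"])
    return responders[:needed]


# Fill 15; three don't respond; take the 5 cheapest of the 12 that do.
responses = [{"maker": f"m{i}", "responded": i % 5 != 0, "fee": 100 - i}
             for i in range(15)]
chosen = choose_subset(responses, 5)
print([r["maker"] for r in chosen])
```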

On your last points, proposing changes:

Re: use multiple times, I'm not really sure. Obviously you intend this as different from simply the fact that we can use the same utxo multiple times (with different commitments). Re: a time limit after first use, I'm not sure either.

My intuition is that further complexifying this already complex system is going to be really hard to reason about. The virtue of the current system is there is one item, and when it is seen, it is used. Broadcast in its simplest version.

eduard6 commented 7 years ago

> But the idea here is we're trying to rate limit, so what's wrong with just firing a very large number (say 15 if you only want 5), and then choosing a subset? This is exactly what @adlai had proposed as his 'taker sophistication' approach, although it hasn't been done yet.

The 15 -> 5 approach would certainly be an improvement and would result in fewer failures to source commitments. I was not aware that was being discussed.

> Creates a race condition that allows a Taker to get UTXOs from an unlimited number of Makers using a single commitment, defeating the purpose of having commitments. Taker just sends !fill requests in parallel to as many makers as they want using the same commitment.
>
> This is by design. It doesn't defeat the purpose of having commitments.

I still don't understand this. If 15 -> 5 is possible, why is 150 -> 5 not possible? If I can get any number of Makers to respond to a !fill request using just one commitment, can I not easily get all Maker UTXOs? For example, if each Maker has 6 UTXOs I could do:

For each Maker (leaving out the irrelevant parts of the protocol):

If the !fill messages are sent in parallel to all Makers, then a Taker can get up to 6 ulist responses from each Maker using only two Taker UTXOs. Of course each ulist may not be unique, but by using different amounts there is a good chance a Taker could get every UTXO a Maker would use for its next JMTX.

If my understanding is right, then I would say the rate limit is not really a limit for an attacker. Using 2 UTXOs to get a good map of all Maker UTXOs is very low cost.
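The enumeration worry can be made concrete with a toy coin-selection model. This is hypothetical code, not joinmarket's actual selection algorithm: it just shows that a Maker selecting coins as a function of the fill amount reveals different utxo subsets for different amounts, so varying the amount across probes maps out the wallet.

```python
# Illustrative sketch (hypothetical names, toy selection algorithm) of the
# amount-probing enumeration described above.

def select_utxos(utxos, amount):
    """Toy greedy coin selection: smallest-first until amount is covered."""
    chosen, total = [], 0
    for u in sorted(utxos, key=lambda u: u[1]):
        chosen.append(u)
        total += u[1]
        if total >= amount:
            break
    return chosen


maker_utxos = [("u1", 1), ("u2", 2), ("u3", 5),
               ("u4", 10), ("u5", 20), ("u6", 50)]

seen = set()
for amount in (1, 3, 8, 18, 38, 88):   # six probes == six commitments
    for name, _ in select_utxos(maker_utxos, amount):
        seen.add(name)
print(sorted(seen))   # every utxo in the toy wallet is revealed
```

Real makers publish only a max and min and use unknown selection logic, which is the counterargument made in the next comment; the toy model shows the best case for the attacker.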

> On your last points, proposing changes:
>
> Re: use multiple times, I'm not really sure. Obviously you intend this as different from simply the fact that we can use the same utxo multiple times (with different commitments). Re: a time limit after first use, I'm not sure either.
>
> My intuition is that further complexifying this already complex system is going to be really hard to reason about. The virtue of the current system is there is one item, and when it is seen, it is used. Broadcast in its simplest version.

Though I like serial !fill better and think it is simpler overall, I am willing to agree that is a matter of taste. However, if my understanding above is right, then I think the current rate limit is not really a sufficient limit. A switch to "broadcast immediately, add to blacklist immediately", with a limit on the number of times a commitment can be used, would I think make the rate limit more of a limit, and would also make serial !fill possible as an alternate Taker strategy. To be clear, my proposal is:

Makers would reject commitments after seeing them X times, e.g. X = 10.

With this setup Makers pause between receiving !fill and sending !ioauth in order to see any !hp2 from other Makers. The result is that each <commitment> can only be used X times across all Makers, and there is no race condition with parallel requests.
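The proposal can be sketched as a counting blacklist. This is hypothetical code, not a patch: the commitment is counted (and would be broadcast) immediately on !fill, a !hp2 from another Maker also counts toward the limit, and after X sightings the commitment is rejected.

```python
# Sketch of the proposed change (hypothetical names): count commitment
# sightings immediately, reject after X uses.

from collections import Counter

MAX_USES = 10   # the X in the proposal

class CountingMaker:
    def __init__(self):
        self.uses = Counter()

    def on_fill(self, commitment):
        """Broadcast-immediately / add-immediately: count on first sight."""
        if self.uses[commitment] >= MAX_USES:
            return False
        self.uses[commitment] += 1   # counted (and broadcast) right away
        return True

    def on_hp2(self, commitment):
        """A !hp2 broadcast from another Maker also counts toward X."""
        self.uses[commitment] += 1


m = CountingMaker()
results = [m.on_fill("c") for _ in range(12)]
print(results.count(True))   # only the first 10 fills are accepted
```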

AdamISZ commented 7 years ago

On your first scenario: yes, 1 Taker utxo (so 3 Taker commitments) can get 3 Maker utxo lists from each Maker. Re: 15 -> 150: yes, within the limitations of the messaging layer (i.e. assuming it accepts this burst of broadcast traffic), you can get information from all Makers this way.
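The "1 utxo -> 3 commitments" arithmetic comes from the PoDLE scheme allowing a small number of distinct commitments per utxo, one per retry index (`taker_utxo_retries`, default 3). The real commitment hashes an EC point; the toy model below just hashes the utxo string plus the index to show the counting.

```python
# Toy model of "1 taker utxo -> 3 commitments". Real PoDLE commitments are
# sha256 of an EC point derived per NUMS generator index; here we only
# mimic the one-commitment-per-index structure with string hashing.

import hashlib

RETRIES = 3   # joinmarket's default taker_utxo_retries

def commitments_for(utxo):
    return [hashlib.sha256(f"{utxo}|{i}".encode()).hexdigest()
            for i in range(RETRIES)]

cs = commitments_for("txid:0")
print(len(set(cs)))   # 3 distinct commitments from one utxo
```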

However, I don't think it gives them a good map. Most makers have lots of utxos (30-50 is not atypical), and selecting amounts to extract the maximum number of them is entirely non-trivial, since they only publish a max and a min, and you have no idea where the mixdepth boundaries are, nor which selection algorithm they're using. And even if you did, you would still only get a subset. Moreover, whatever you do get, as a snapshot, will not help much in identifying cjouts unless you repeat the process again after any new transactions. I'm sure you've read my "racing" blog post about this topic.

Interestingly, after the 0.2 update with this feature, I kept checking for a long while, and I saw evidence strongly suggesting that this attack stopped entirely, and didn't restart. Because of the "racing" nature of the attack, it would have to be done steadily, over time, if you wanted to get anywhere in finding cjouts. I'm of the opinion that the previously existing attackers abandoned it entirely, probably because they didn't want their own utxos leaking.

In recent months I see basically no started-but-not-completed transactions as a Maker (recently I've been doing about 10/day with very low fees). I do occasionally see "Authorisation failed", but from debugging problems with people in IRC this seems to be mostly people trying to force through transactions by tweaking their utxo config variables in the config file. Failures from the tumbler will crash that side; if they don't edit the config, they won't get to the Maker (unless something is screwy with their blockchain interface giving them bad info, which can happen too).

I'm of the opinion that we should focus on the #693 work; if it can be done in a sane way, I think we have a decent level of defence on both sides, although an attacker who doesn't mind leaking their own utxos can still try this attack. Like I say, right now it isn't happening.