stellar / stellar-protocol

Developer discussion about possible changes to the protocol.

Address Fee-Paying For Future-Proofing Transactions #133

Closed jedmccaleb closed 4 years ago

jedmccaleb commented 6 years ago

This will allow you to set a higher fee so that the tx will still work during surge pricing; it also helps future-proof your txs.

Current Proposals:

robdenison commented 6 years ago

One mechanism that could enable this would be a uniform-price auction:

1. Sort transactions by priority (either using the current algorithm, or using the algorithm suggested here).
2. Select transactions until the total number of transactions (or operations, see #75) is too high.
3. If the transaction queue is now empty (i.e., every transaction has been selected), all transactions pay the minimum fee necessary (based on the minimum feerate * their operation count).
4. If the transaction queue is not empty, then all transactions pay the same feerate as the lowest-feerate transaction that was in fact included in the block.

That way, transactions can set a fee higher than the minimum, with the comfort of knowing that their transactions will only pay the minimum necessary to actually get included in a block.
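The selection and pricing steps above can be sketched roughly as follows. This is an illustrative sketch only: `Tx`, `MIN_FEE_RATE`, and the field names are assumptions for the example, not stellar-core types.

```python
# Sketch of the uniform-price auction described above (illustrative only).
from dataclasses import dataclass

MIN_FEE_RATE = 100  # assumed network minimum fee per operation

@dataclass
class Tx:
    max_fee: int   # maximum total fee the tx is willing to pay
    num_ops: int   # number of operations in the tx

    @property
    def fee_rate(self) -> float:
        return self.max_fee / self.num_ops

def uniform_price_auction(queue: list[Tx], max_ops: int) -> list[tuple[Tx, int]]:
    """Select txs by fee rate and charge every included tx the clearing rate."""
    # 1) Sort by priority (fee rate, highest first).
    ordered = sorted(queue, key=lambda t: t.fee_rate, reverse=True)
    # 2) Select transactions until the operation limit would be exceeded.
    selected, ops = [], 0
    for tx in ordered:
        if ops + tx.num_ops > max_ops:
            break
        selected.append(tx)
        ops += tx.num_ops
    if len(selected) == len(queue):
        # 3) Queue emptied: everyone pays the minimum feerate.
        rate = MIN_FEE_RATE
    else:
        # 4) Otherwise: everyone pays the lowest included feerate.
        rate = min(tx.fee_rate for tx in selected)
    return [(tx, int(rate * tx.num_ops)) for tx in selected]
```

For example, with three single-op transactions bidding 1000, 500, and 200, and room for only two, both included transactions pay 500; if all three fit, they all pay the minimum.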

I think the challenges described in this post should already mostly be avoided thanks to Stellar's fee mechanism, where fees are pooled and redistributed through inflation, rather than being given to a particular miner who is responsible for choosing transactions for that block. Validators do not have any particular incentive to manipulate transactions to force fees to be higher, and it would be costly to create fake transactions themselves to do so.

MonsieurNicolas commented 6 years ago

> If the transaction queue is not empty, then all transactions pay the same feerate as the lowest-feerate transaction that was in fact included in the block.

I think there is an interesting edge case to discuss.

I don't know if it's worth dealing with it, but it's probably worth discussing.

What we're talking about is picking the top n transactions:

```
tx[1..n].maxfeerate >= X
tx[n+1].maxfeerate == Y
```

Note that what is specified in the transaction is not the per-operation fee rate, but the maximum total fee (i.e., already multiplied by the number of operations).

I think that if X == Y it's fair to pick actual_fee_rate = Y (what your solution would pick).

With X > Y, while we could pick actual_fee_rate = X, we could also pick a much less aggressive actual_fee = Y*nb_op + 1 that meets the criteria without causing a spike in fees.
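A small worked example of the two clearing rules, with assumed numbers (X, Y, and nb_op are illustrative values, not protocol constants):

```python
# X: lowest fee rate among included txs; Y: fee rate of the first excluded tx.
X, Y = 500, 200          # edge case under discussion: X > Y
nb_op = 3                # operations in an included transaction

aggressive_fee = X * nb_op            # charge the included rate
less_aggressive_fee = Y * nb_op + 1   # just outbid the excluded tx

# Both rules still outbid the excluded transaction...
assert less_aggressive_fee > Y * nb_op
# ...but the less aggressive rule avoids the fee spike.
assert less_aggressive_fee < aggressive_fee
```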

robdenison commented 6 years ago

> While we could pick actual_fee_rate=X, we could also pick a much less aggressive actual_fee = Y*nb_op+1 that meets the criteria without causing a spike in fee.

Good point. Technically this version of the mechanism would be gameable—you could raise the fees others paid by submitting a transaction that does not actually get included in the block. But there isn't much of an incentive to do that (since fees are burned), and in the worst case it would resolve to the same thing as the default rule I was proposing, and this is an edge case (that is probably hard to cause or predict) anyway, so it sounds fine.

JeremyRubin commented 6 years ago

I think enabling fee bumping (via child pays for parent) is a more flexible option.

One way to enable this is to add a new operation that specifies a minimum sequence number another account must reach in order for that tx to go in, and then specifies an amount of fee to be distributed across the pending txns before that sequence. The fee should be distributed across those txns on some sort of 'even' basis so that the whole package is bumped.
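A hypothetical sketch of the 'even' fee distribution across the package; the function name and the rule for handling a non-divisible remainder are assumptions, not anything specified here:

```python
# Split a bump fee evenly across the pending txns ahead of a target
# sequence number (illustrative only, not a stellar-core API).
def distribute_bump(bump_fee: int, pending_seqs: list[int],
                    target_seq: int) -> dict[int, int]:
    """Map each pending seq < target_seq to its share of bump_fee."""
    package = [s for s in pending_seqs if s < target_seq]
    if not package:
        return {}
    share, remainder = divmod(bump_fee, len(package))
    # Equal shares; the earliest txns absorb the remainder (assumed rule).
    return {s: share + (1 if i < remainder else 0)
            for i, s in enumerate(sorted(package))}
```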

Technically, this can be done implicitly, but being explicit serves as a hint for implementation.

robdenison commented 6 years ago

Jeremy pointed out to me one problem with this naïve auction mechanism—it could result in unnecessary fee spikes. If there are 51 transactions (or 1001 operations, after that change takes effect) with a very high maxfee, and they are all submitted at the same time, they will end up in the same ledger paying max fee, whereas if just one of those were delayed until the next block, they would all end up paying the minimum fee.

In other words, a single-ledger uniform-price fee auction doesn't take into account the fact that in the typical case, people would rather have their transaction take 5 extra seconds but pay the minfee, rather than pay the maxfee to be included immediately.

If you're in control of when you submit the transaction, this might be mostly fine (although you may not be able to predict the other transactions that will be submitted at the same time). But if someone else is submitting a transaction for which you are responsible for the fees, they could time the transaction submission (and combine it with other such transactions) to maximally grief you. While this situation seems esoteric, unfortunately it is very difficult to avoid in the case of payment channels.

Perhaps transactions should be allowed to express an additional limit on their fees: a maximum permissible increase in feerate from the previous block's minimum fees. The protocol would take the lower of those two numbers, and treat that as their "bid" in the uniform-price auction. The rest of the auction could occur in the same way.
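The effective bid under that proposal reduces to taking the lower of two limits; a minimal sketch, with all names assumed:

```python
# A tx's bid is capped both by its own max fee rate and by the previous
# ledger's minimum rate plus the tx's allowed per-ledger increase.
def effective_bid(max_fee_rate: int, prev_ledger_min_rate: int,
                  max_rate_increase: int) -> int:
    """Lower of the tx's own cap and the capped increase from last ledger."""
    return min(max_fee_rate, prev_ledger_min_rate + max_rate_increase)
```

So a tx willing to pay up to 1000 per op, but capping its increase at 50 over a previous minimum of 100, would bid 150 in the auction.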

An alternative would be to have this maximum permissible increase in feerate each ledger be a protocol-wide constant, although that would hamper a user's ability to opt-in to specifying a transaction as particularly urgent.

> I think enabling fee bumping (via child pays for parent) is a more flexible option.

I think the maxfee design is better for presigned or preauthorized transactions, because it allows the party who prefunds the fee to be different from the party trying to get the transaction included, while still protecting the former party from having to pay the fee unless it is actually necessitated by current ledger conditions. This is particularly important for payment channels.

For most other cases I think "replace-by-fee" (allowing a pending transaction to be replaced by one with the same account and sequence number, if the new one has a higher fee) should be good enough. Is this currently implemented? (Sorry, this is getting a little off-topic from the current post.)
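The replace-by-fee rule described in the parenthetical can be sketched as a simple admission check against the pending pool; `PendingTx` and the pool shape are illustrative assumptions:

```python
# Minimal sketch of replace-by-fee: a pending tx may be replaced by one
# with the same source account and sequence number but a higher fee.
from dataclasses import dataclass

@dataclass
class PendingTx:
    source: str
    seq: int
    fee: int

def try_replace(pool: dict[tuple[str, int], PendingTx], tx: PendingTx) -> bool:
    """Admit tx, replacing a conflicting pending tx only if tx pays more."""
    key = (tx.source, tx.seq)
    existing = pool.get(key)
    if existing is not None and tx.fee <= existing.fee:
        return False  # reject: same (account, seq) without bumping the fee
    pool[key] = tx
    return True
```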

JeremyRubin commented 6 years ago

Having a max fee increase per block doesn't help; then rational (fee-maximizing) transaction selection is to delay that transaction until it is paying the maximum fee.

I think we're best off making RBF and CPFP work because they can technically require no actual change to any transaction structure, and just represent more rational decision making for the network.

I think that most 'maxfee' cases can more safely be handled by a combo of RBF and CPFP.

robdenison commented 6 years ago

> Having a max fee increase per block doesn't help; then rational (fee-maximizing) transaction selection is to delay that transaction until it is paying the maximum fee.

Only if there are sufficient other transactions competing for space that bid up the transaction fee over several blocks.

I do see the point that if validators had a very crowded mempool and were willing to do anything to maximize the fees paid by transactions, they could censor the high-maxfee transactions (basically pretend they don't see them) until the medium-maxfee transactions bid up the feerate sufficiently, which is perverse. But I don't think it's right to base the transaction selection algorithm on this particular model of "rational" validator behavior.

First, validators are not really incentivized to maximize transaction fees, because fees are burned and redistributed to inflation recipients, rather than paid to validators (although, of course, validators may also be inflation recipients).

Second, I think it is fair to consider this kind of censorship (which requires actively pretending not to have seen particular transactions) dishonest behavior. A coalition of validators that actively censors transactions can undertake much more damaging and profitable attacks, including causing payment channels to be closed at outdated states.

In fact, I believe that multi-block censorship of particular transactions would require cooperation of not only a blocking set, but an entire quorum, because of the anti-censorship mechanisms built into the SCP's nomination protocol. A colluding quorum, of course, has nearly unbounded capacity for mischief.

> I think we're best off making RBF and CPFP work because they can technically require no actual change to any transaction structure, and just represent more rational decision making for the network.

I agree that RBF (allowing a transaction to replace a pending transaction if the new one has the same source account and sequence number but higher fee) is a priority. CPFP the way you're proposing it would require a new operation and a pretty substantial change to how transaction selection works, and would have some weird edge cases. For example, I think doing this as an operation would be dodgy because you need the bumping amount to be paid whether the bumping transaction fails or succeeds. And I worry that you would end up with a knapsack problem. Do you want to open an issue with a proposal for CPFP and we can discuss there?

> I think that most 'maxfee' cases can more safely be handled by a combo of RBF and CPFP.

Unfortunately, not the one we need for payment channels, which is "presigned transactions where one party—not necessarily the party trying to get the transaction included—is always responsible for fees, but only pays what is actually needed to get the transaction included." That use case almost by definition requires a fee mechanism that has some awareness of the current fee market.

MonsieurNicolas commented 6 years ago

To answer the question: "replace-by-fee" is not implemented; transactions are rejected by validators right away if they overlap (sequence-number-wise) with pending transactions. I agree, we need this for sure.

CPFP (or whatever name makes sense) deserves its own issue to discuss; I like that it makes any transaction "future proof". @JeremyRubin, can you take the lead on this one? I don't know how bad it is going to be to implement, but I think we should investigate that solution in parallel regardless (we may not need to change the current behavior if it's something we deem simple enough).

Let's continue discussing the default fee behavior in this issue.

Going back then to what seems to be the main problem:

> Jeremy pointed out to me one problem with this naïve auction mechanism—it could result in unnecessary fee spikes. If there are 51 transactions (or 1001 operations, after that change takes effect) with a very high maxfee, and they are all submitted at the same time, they will end up in the same ledger paying max fee, whereas if just one of those were delayed until the next block, they would all end up paying the minimum fee.

I agree that it's sub-optimal from a fee point of view. The problem, I think, is that we can't really put the delaying logic in validators, as delaying a transaction seems to create worse problems than it solves: if more transactions are queued up, there is then no guarantee that the transaction will make it (and eventually it will get dropped).

I'd rather keep the responsibility of the validators as: process as many transactions as possible per ledger, regardless of fee. On that note, this is also how we decide between two competing transaction sets during consensus (which, unlike transaction set selection, is part of the protocol): we take the one with the most transactions (this is how we decide which value to prepare from the set of values that got nominated). I don't think that using total fee (or lowest fee) as a way to order transaction sets is wise at this stage either.
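As a toy illustration of that comparison rule (the function name and tie-breaking toward the first set are assumptions), choosing between two nominated transaction sets uses only their sizes, not their fees:

```python
# Prefer the transaction set containing more transactions; fee totals are
# deliberately not part of the ordering (illustrative sketch only).
def better_tx_set(a: list, b: list) -> list:
    return a if len(a) >= len(b) else b
```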

MonsieurNicolas commented 6 years ago

I put together a first draft of https://github.com/stellar/stellar-protocol/blob/master/core/cap-0005.md that should address this particular issue.

theaeolianmachine commented 5 years ago

Went ahead and updated the initial comment with the three main proposals we're discussing at the moment for solving this problem.

jonjove commented 4 years ago

Addressed by CAP-0015 and implemented in https://github.com/stellar/stellar-core/pull/2419