solana-labs / solana

Web-Scale Blockchain for fast, secure, scalable, decentralized apps and marketplaces.
https://solanalabs.com
Apache License 2.0

Promote transactions with requested CU #28751

Closed · tao-stones closed 8 months ago

tao-stones commented 1 year ago

Problem

We need to encourage users to start setting an accurate compute-unit-limit (aka requested CU) for their transactions.

Proposed Solution

Sort transactions with the same priority by lowest requested CU first.
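
For illustration, the ordering could look something like the sketch below. The types and field names are hypothetical, not the actual scheduler's:

```rust
use std::cmp::Ordering;

// Hypothetical per-transaction metadata the scheduler could sort on.
struct TxPriority {
    priority: u64,     // derived from compute-unit-price
    requested_cu: u64, // compute-unit-limit set by the transaction
}

// Higher priority first; among equal priorities, lower requested CU first.
fn compare(a: &TxPriority, b: &TxPriority) -> Ordering {
    b.priority
        .cmp(&a.priority)
        .then(a.requested_cu.cmp(&b.requested_cu))
}
```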

tao-stones commented 1 year ago

tag @aeyakovenko @apfitzge

tao-stones commented 1 year ago

Would like to hear more inputs. I had a comment here: https://github.com/solana-labs/solana/pull/28791#discussion_r1020484409

apfitzge commented 1 year ago

Would like to hear more inputs. I had a comment here

Yeah, let's move the discussion of that here instead of in the PR.

Having a bool and using that for ordering would make it strictly only prioritize those txs with requested CUs, which seems more fair. If we go by lower CU limits, then heavier transactions are pushed lower into the queue.

tao-stones commented 1 year ago

Having a bool and using that for ordering would make it strictly only prioritize those txs with requested CUs, which seems more fair. If we go by lower CU limits, then heavier transactions are pushed lower into the queue.

yeah, then we need to update compute-budget to let the call site know if a set_compute_unit_limit instruction was present in the transaction. Currently the default of 200K is used if compute-unit-limit is not requested explicitly.
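
Roughly like the following sketch, with made-up names (the real compute-budget code differs): carry an explicit flag up to the scheduler instead of silently substituting the default.

```rust
// Sketch only: constant and field names are illustrative.
const DEFAULT_COMPUTE_UNIT_LIMIT: u32 = 200_000;

struct ComputeBudgetLimits {
    compute_unit_limit: u32,
    // New: true only if a set_compute_unit_limit instruction was present.
    compute_unit_limit_is_set: bool,
}

fn from_instructions(requested: Option<u32>) -> ComputeBudgetLimits {
    ComputeBudgetLimits {
        compute_unit_limit: requested.unwrap_or(DEFAULT_COMPUTE_UNIT_LIMIT),
        compute_unit_limit_is_set: requested.is_some(),
    }
}
```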

tao-stones commented 1 year ago

@t-nelson @jstarry @carllin wdyt?

Overclock-Validator commented 1 year ago

If you charge a base fee per CU, that should incentivize more accurate estimation, right? If you incentivize smaller transactions like this, the MEV people might just break up their transactions into separate smaller ones, and then you have more sigverify work.

Looking at https://docs.solana.com/proposals/comprehensive-compute-fees, you could set the default requested compute to 5k instead of 200k and start charging the base fee per CU for all transactions. If they don't set the requested compute higher than that, most transactions will just fail by default. People are probably just too lazy to bother lowering the default from 200k.
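
For reference, transactions can already opt out of the default by including compute-budget instructions, roughly like this with solana-sdk (module paths may vary by version; the 5k figure is just the example above):

```rust
use solana_sdk::compute_budget::ComputeBudgetInstruction;
use solana_sdk::instruction::Instruction;

// Opt out of the 200k default: request what the transaction actually
// needs (say 5k CUs) and optionally bid a compute-unit price.
fn compute_budget_ixs() -> Vec<Instruction> {
    vec![
        ComputeBudgetInstruction::set_compute_unit_limit(5_000),
        ComputeBudgetInstruction::set_compute_unit_price(1), // micro-lamports per CU
    ]
}
```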

I still like the idea of priority being a multiplier: Fee = sig_cost + priority * (requested_compute * base_fee_per_cu). Priority is 1 by default, is an integer that can be changed, and the base fee per CU is just something relatively cheap but not insanely so. Just need to make sure users don't accidentally rug themselves by setting the priority way too high by accident...
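
Read literally, that schedule would be something like this sketch (all names and units illustrative):

```rust
// Hypothetical fee: fee = sig_cost + priority * (requested_cu * base_fee_per_cu),
// with priority defaulting to 1 so every transaction pays for its CUs.
fn fee(sig_cost: u64, priority: u64, requested_cu: u64, base_fee_per_cu: u64) -> u64 {
    sig_cost.saturating_add(
        priority
            .saturating_mul(requested_cu)
            .saturating_mul(base_fee_per_cu),
    )
}
```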

tao-stones commented 1 year ago

If you charge a base fee per CU, that should incentivize more accurate estimation, right?

Yep, part of the transaction fee charged is prioritization = requested_cu * compute-unit-price. It's just that the compute-unit-price isn't high enough on mainnet-beta right now, so it doesn't make much of a difference in the charged prioritization fee. Hopefully that changes as more users bid for priority with more accurate CUs.
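
As a worked example of that formula, assuming compute-unit-price keeps its current denomination of micro-lamports per CU:

```rust
fn main() {
    // prioritization_fee = requested_cu * compute_unit_price (micro-lamports)
    let price: u64 = 100; // micro-lamports per CU
    let padded = 200_000 * price; // 20_000_000 micro-lamports = 20 lamports
    let accurate = 5_000 * price; //    500_000 micro-lamports = 0.5 lamports
    println!("padded: {padded}, accurate: {accurate}"); // same priority, 40x fee gap
}
```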

If you incentivize smaller transactions like this, the MEV people might just break up their transactions into separate smaller ones, and then you have more sigverify work.

Absolutely, good point.

Fee = sig_cost + priority * (requested_compute * base_fee_per_cu).

This is something worth considering for sure. I want to see how the priority fee works out as more users adopt it.

Overclock-Validator commented 1 year ago

Right, right. So in some future where block packing is scheduled based on requested compute vs estimated compute, compute estimations will naturally become better as demand rises (because people will want to avoid excess fees).

What if you just start with the compute requested defaulting to something lower than 200k? 5k for example

tao-stones commented 1 year ago

What if you just start with the compute requested defaulting to something lower than 200k? 5k for example

Yeah, I thought that's a good alternative too. By shrinking the default from 200K to something much smaller, many transactions that currently don't request CUs will fail until they start requesting them.

@jackcmay @Lichtso do you think it's OK to reduce the default CU from 200K to 5K?

Lichtso commented 1 year ago

That would pretty much kill the network instantly. IMO, the only way to do it is slowly, decreasing it over many epochs.

Overclock-Validator commented 1 year ago

There probably need to be clear announcements around it (on Twitter and Discord), along with decreasing it over time... A lot of this stuff seems way too hidden and hard to find on GitHub. The people impacted first by a step down would primarily be MEV folks (?), and they can change things pretty quickly. Wallets also still haven't added any priority functionality in general...

VinceJTorrel commented 1 year ago

Be careful with the design considerations here. Very careful. If I could pick one improvement that would make or break Solana, especially given current events, it would be this one and any others related to fee computation.

What I think is missing here is the network effect of write target contention. Its costs on the network are more quadratic than linear in the number of competitors (especially if we're talking DDoS), and arguably even worse than that under heavy load due to the cascading effects of NIC saturation and CPU cache eviction on the validators. The problem is that there's not necessarily enough time for competitors to perform price discovery under that steep curve and revise their bids accordingly, so the incentive is to just keep firing linearly increasing (but still rather trivial) priority fee bids in the hopes of winning access.

The cost increases under such circumstances must be brutal enough to shut down the contention before it explodes out of control. So perhaps it should be more like:

Fee = sig_cost + priority^k * (requested_compute * base_fee_per_cu)

where k maybe starts at 2 and is then manually optimized by the community (or ideally, tuned in real time based on conditions during the most recently appended block, although that's not really feasible unless somehow we can get consensus about past contention via appropriate metrics recorded in that block). For that matter, I wouldn't make priority an integer because doing so just increases the chances of tied bids (and how do you fairly resolve a tie, other than to pick a random number based on some old block hash which the competitors themselves would already know?). Let it have some fixed number of binary places after the binary point.
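
A sketch of that superlinear schedule, with f64 standing in for the fixed-point priority suggested above and k as a placeholder exponent:

```rust
// Hypothetical superlinear fee:
//   fee = sig_cost + priority^k * requested_cu * base_fee_per_cu
// k = 2 as the proposed starting point; a real implementation would use
// fixed-point priority (a few binary places after the point) to avoid ties.
fn fee(sig_cost: f64, priority: f64, k: f64, requested_cu: f64, base_fee_per_cu: f64) -> f64 {
    sig_cost + priority.powf(k) * requested_cu * base_fee_per_cu
}
```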

We all hate high gas fees, but Ethereum is still running. Better to increase costs too much until an optimum is discovered, than to allow another spam shutdown of the network.

Overclock-Validator commented 1 year ago

Supposedly QUIC will be much better at handling spam, but you can still get link saturation (hard to prevent if it's purely an attack?).

But yeah, I think charging for the base compute of transactions should still happen, i.e. charge 0.000001 lamports per compute unit. Those with low-latency access to the beginning of the block might be able to slip in very large MEV transactions for themselves at low priority cost. A 500k compute unit transaction currently costs the same as one with 2k compute units, without priority fees. Anyway, this would also have the benefit of helping to monetize the validator set better. I think the majority of the validator set is currently unprofitable. We broke even on just vote fees at 300k SOL because we need to compete at 0% commission to get stake.
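
As rough arithmetic for the suggested rate (reading 0.000001 lamports as 1 micro-lamport per CU):

```rust
fn main() {
    let base_per_cu: u64 = 1; // micro-lamports per CU
    let heavy = 500_000 * base_per_cu; // 500_000 micro-lamports = 0.5 lamports
    let light = 2_000 * base_per_cu;   //   2_000 micro-lamports = 0.002 lamports
    // Today both transactions pay only the same signature fee.
    println!("heavy: {heavy}, light: {light}");
}
```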

My favorite idea is the one below, which charges "stale" transactions, but I'm not an engineer and have no idea how feasible this system is. The issue on Solana is that you can spam, have transactions drop, and never be charged. If you send 1 million transactions, only so many can fit in blockspace before expiring. With this system, there is a window after the normal transaction lifetime where a transaction becomes a low-compute "stale" version: just the fee payer is charged, the rest of the transaction is ignored, and they get charged for the original compute requested. It is highly parallelizable and high throughput because just the fee payer is charged. If anything, this disincentivizes the wasteful spamming that validators have to pay for by provisioning more bandwidth and resources. https://github.com/solana-labs/solana/issues/25211

VinceJTorrel commented 1 year ago

I don't disagree with stale transaction fees. I also have no opinion on how much of any of these fees under discussion should go toward validators and/or their stakers. The usual commission split could work just fine, or some extra accommodation could be made for the 0% ones.

The crux of the write-contention problem is this: the players in the game don't know, and can't quickly discover, how much contention actually exists. So you get a million of them bidding the minimum, 700K bidding double that, 300K bidding triple that, etc. in some sort of power-law decay. Some of these players have ways, potentially via validator "friends" or RPC oracles, to learn that the contention is fierce. So they raise their bids and try again. Then there's still contention, so they raise and try again. Because it's likely that no player knows the actual maximum bid (because consensus hasn't yet happened and it's growing steadily anyway), they tend to creep along, linearly increasing their bids. Vast volumes of traffic smash into the validators, again and again, until consensus is achieved and a winner is declared. Polynomial fee escalation can stop this wasteful process by forcing players to give up sooner (because they just can't afford the next fee increment). The downside is that it increases the chances of a tie occurring. But that's not catastrophic. It would just need to be resolved in some deterministic and fair way, for example the wallet with the least difference from the previous blockhash.
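
Purely as an illustration of such a deterministic tie-break, reading "least difference from the previous blockhash" as byte-wise XOR distance (an assumption for the sketch, not something specified in this thread):

```rust
// XOR the fee payer's 32-byte key with the previous blockhash; the
// resulting arrays compare lexicographically, so a smaller distance
// means a "closer" key.
fn xor_distance(payer: &[u8; 32], blockhash: &[u8; 32]) -> [u8; 32] {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = payer[i] ^ blockhash[i];
    }
    d
}

// Among tied bids, the payer with the smaller distance wins.
fn tie_break_wins(a: &[u8; 32], b: &[u8; 32], blockhash: &[u8; 32]) -> bool {
    xor_distance(a, blockhash) < xor_distance(b, blockhash)
}
```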

The same could apply to stale transactions by increasing the fees polynomially fast when a given fee payer attempts successively more transactions within the same slot time.

But there's another reason for polynomial fee acceleration: somebody needs to pay for the impact of all that spam. Linearly more spam creates polynomially more energy usage and latency. The first taxes the validators, directly or indirectly. The second taxes Solana's reputation. (Look at the ping time chart on the Explorer, for example. You'll immediately notice that ping times are not uniformly distributed. There are massive spikes due to congestion.)

Solana needs to develop zero tolerance for downtime if it wants to survive as a chain. Therefore, it cannot risk getting into a situation in which the potential reward of a transaction (say, an NFT purchase) is so high that it's worth paying all the stale transaction fees that could possibly be charged during a single slot time. Because then everyone aware of the auction, but not immediately aware of the massive contention, would be incentivized to pay them and spam away.

Spam has to actually hurt the spammer before it can hurt everyone else.

overclock-validator1 commented 1 year ago

Bumping this again. If I had bad intentions, I could horribly wreck UX for normal users right now, because there is no base fee per CU.