taikoxyz / taiko-mono

A based rollup. 🥁
https://taiko.xyz
MIT License

feat(protocol): Auction #13813

Closed cyberhorsey closed 10 months ago

cyberhorsey commented 1 year ago

Describe the feature request

We want to try an implementation of an auction mechanism, ideally as an L3 running in parallel with our L2 running the current tokenomics, to compare and contrast. This thread can serve as a discussion among core members to figure out the best way to make a simple, early, non-gas-optimized proof of concept.


cyberhorsey commented 1 year ago

The main thing I like about this strategy vs other tokenomics is:

cyberhorsey commented 1 year ago

This is a very simple, one day WIP: https://github.com/taikoxyz/taiko-client/pull/241

https://github.com/taikoxyz/taiko-mono/pull/13811

It ignores more complicated concepts like "bidding for batches" (which is obviously more gas efficient) and simply says: after a block is proposed, you bid; when the auction ends, whoever will accept the least amount of wei per gasUsed as the reward wins, and the proposal fee becomes some multiplier of the averageProofReward.

The client then says "here is the bidding strategy I would like to use" (always bid; profitable only, if you can estimate the cost of proofs; down to a minimum amount; etc.), and if you are the successful bidder, you start to generate the proof.

The protocol code also has the concept of a deposit, but hasn't implemented something like: if you are late on the proof, we slash your deposit, and half goes to the burner / half goes to the new prover. It also has no concept of a minimum decrease under the previous bid for a bid to be valid, so the auction can be postponed indefinitely by submitting a 1-wei-smaller bid; there is no concept of "the auction is over N seconds after the first bid", only the most recent bid.

It is missing a lot, obviously, but it's simply something to get the discussion/ball rolling.
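The per-block mechanism described above can be sketched roughly as follows (all names here are hypothetical illustrations, not the actual WIP code in the linked PRs): each bid states the reward in wei per gasUsed the prover will accept, and the lowest bid wins.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    prover: str
    wei_per_gas: int  # reward the bidder will accept, in wei per gasUsed

class BlockAuction:
    """Lowest-bid-wins auction for a single proposed block (illustrative only)."""

    def __init__(self) -> None:
        self.best: Optional[Bid] = None

    def submit(self, bid: Bid) -> bool:
        # A bid is only accepted if it undercuts the current best bid.
        if self.best is None or bid.wei_per_gas < self.best.wei_per_gas:
            self.best = bid
            return True
        return False

    def winner(self) -> Optional[Bid]:
        return self.best

auction = BlockAuction()
auction.submit(Bid("alice", 100))
auction.submit(Bid("bob", 120))   # rejected: does not undercut alice
auction.submit(Bid("carol", 90))  # new best bid
assert auction.winner().prover == "carol"
```

As the comment notes, this sketch shares the same weakness as the WIP: nothing forces a minimum decrease per bid, so 1-wei-smaller bids can drag the auction out indefinitely.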

dantaik commented 1 year ago

Implementing a simple, batch-based auction system for prover cost-efficiency

Background

Our current tokenomics framework rewards the fastest prover, encouraging prioritization of speed over cost-effectiveness. This conflicts with Taiko's commitment to delivering cost-efficient proofs. By incentivizing cost-effective proofs, we can significantly reduce Taiko's overall Layer 2 (L2) expenses, enhancing our competitive edge in the marketplace.

Proposed Solution

I suggest introducing an auction mechanism to recalibrate provers' incentives, guiding them towards prioritizing cost-effectiveness. This mechanism facilitates provers to bid for block rewards, creating a transparent fee market. Moreover, this model promotes resource conservation, permitting provers to commit to resource-intensive Zero-Knowledge Proof (ZKP) computations only once they have definitively secured a block reward.

Design Considerations

Batch-Based Strategy

Given the high gas fees associated with Ethereum, we endorse a batch-based strategy for running auctions. This approach bestows the winning bidder with proving rights for a batch of blocks, thereby reducing per-block gas costs. During the testnet phases, we anticipate starting with a smaller batch size, with plans to scale up to 256 blocks by the mainnet launch.

Considerations for Future Blocks and Gas Costs

To mitigate the potential for accumulated ZKP delays, I suggest conducting auctions for upcoming blocks even before they are proposed. This approach does introduce uncertainty for provers due to the unknown block gas used and data size at the auction's outset. To address this, I propose a pricing model for the auction based on the gas/data usage of the auctioned block, with the block reward calculated as x*y if the winning bid is x TKO token per gas and the actual gas used by the block is y. Here, x will be termed as the bid per gas, or simply bid.

Bidding Procedures and Deposit Requirements

I suggest setting the initial bidding price for new auctions at s=2*p, while maintaining a moving average bid for all verified blocks, represented as p. Each successive bid should be at least 5% lower than the current bid. Bidders would need to deposit s * max_block_gas_limit * num_blocks * 1.5 Taiko tokens for the batch. A penalty of s * max_block_gas_limit * 1.5 Taiko tokens will be imposed and then burnt for each block the winner fails to prove within the stipulated timeframe; if successful, the deposit will be refunded.
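As a rough illustration of the arithmetic proposed here (function names are hypothetical; units are TKO tokens):

```python
# Sketch of the bidding/deposit/penalty arithmetic from the proposal above.
MIN_BID_DECREASE = 0.05  # each new bid must be at least 5% lower

def starting_price(p: float) -> float:
    """Initial bidding price s = 2 * p, where p is the moving-average bid."""
    return 2 * p

def is_valid_bid(new_bid: float, current_bid: float) -> bool:
    return new_bid <= current_bid * (1 - MIN_BID_DECREASE)

def required_deposit(s: float, max_block_gas_limit: int, num_blocks: int) -> float:
    """Deposit = s * max_block_gas_limit * num_blocks * 1.5 for the batch."""
    return s * max_block_gas_limit * num_blocks * 1.5

def settle(s: float, max_block_gas_limit: int, num_blocks: int,
           blocks_missed: int) -> float:
    """Refund after burning s * max_block_gas_limit * 1.5 per block proven late."""
    deposit = required_deposit(s, max_block_gas_limit, num_blocks)
    penalty = s * max_block_gas_limit * 1.5 * blocks_missed
    return max(deposit - penalty, 0.0)

s = starting_price(p=10.0)        # 20.0 TKO per gas
assert is_valid_bid(19.0, s)      # exactly 5% under 20.0: accepted
assert not is_valid_bid(19.5, s)  # only 2.5% lower: rejected
deposit = required_deposit(s, 8_000_000, 4)
assert settle(s, 8_000_000, 4, blocks_missed=0) == deposit
assert settle(s, 8_000_000, 4, blocks_missed=1) == deposit * 3 / 4
```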

Auction Window, Proving Window, and Multiple Auction Management

The auction window starts with the first bid and ends after either 5 minutes or 25 Ethereum blocks (or if the bid falls to 50% of p?). Blocks become eligible for proving only once the auction has officially ended. The auction winner must submit the first ZKP for the block within 60 minutes of the block proposal or the auction's end, whichever is later. Other provers can submit proofs to create additional fork choices after the initial ZKP submission or upon the proving window's expiration. Multiple concurrent auctions for different batches are allowed, but I suggest limiting auctions to the upcoming 100 batches for improved manageability.
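A minimal sketch of these window rules, assuming the parameters named above (identifiers are illustrative, not actual protocol code):

```python
# Illustrative checks for the auction-end and proof-deadline rules above:
# the window closes 5 minutes or 25 L1 blocks after the first bid, or
# (optionally) when the bid drops to 50% of the moving average p.
AUCTION_WINDOW_SECONDS = 5 * 60
AUCTION_WINDOW_BLOCKS = 25

def auction_ended(now: int, first_bid_time: int,
                  l1_block: int, first_bid_block: int,
                  current_bid: float, p: float) -> bool:
    if now - first_bid_time >= AUCTION_WINDOW_SECONDS:
        return True
    if l1_block - first_bid_block >= AUCTION_WINDOW_BLOCKS:
        return True
    if current_bid <= 0.5 * p:  # optional early close
        return True
    return False

def proof_deadline(block_proposed_at: int, auction_end_at: int) -> int:
    """Winner must submit the first ZKP within 60 minutes of the block
    proposal or the auction's end, whichever is later."""
    return max(block_proposed_at, auction_end_at) + 60 * 60

assert auction_ended(300, 0, 10, 0, 9.0, 10.0)      # 5 minutes elapsed
assert auction_ended(100, 0, 26, 0, 9.0, 10.0)      # 25 blocks elapsed
assert auction_ended(100, 0, 10, 0, 5.0, 10.0)      # bid fell to 50% of p
assert not auction_ended(100, 0, 10, 0, 9.0, 10.0)  # still open
assert proof_deadline(1_000, 1_200) == 1_200 + 3_600
```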

Reward and Penalty Mechanisms

If the verified block's chosen fork choice originates from the auction winner's proof, the winner's deposit and reward TKO tokens are refunded. If the chosen fork choice comes from another prover's proof, the latter receives half the deposit, and the remaining half is burnt. This mechanism ensures fair competition and deters manipulation, such as winners submitting correct proofs via different addresses.

Absence of Fallback Mode

There is no fallback fee/reward model for blocks that are not auctioned. This simplifies the auction design and eliminates the need for dual tokenomics systems - an auction-based primary system and an alternative fallback one.

Block Fees

I propose to charge a fee in Taiko tokens from the block proposer calculated as p * gas_limit, where p is the moving average bid for all verified blocks. We could introduce another moving average q to cover the bid of all unverified blocks that have been auctioned. For instance, the fee could be calculated as (p * 0.5 + q * 0.5) * gas_limit.
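The fee formula might look roughly like this (names are illustrative only, and the moving-average update shown is just one possible choice, not part of the proposal):

```python
from typing import Optional

def proposer_fee(p: float, gas_limit: int, q: Optional[float] = None) -> float:
    """Fee = p * gas_limit, or (p * 0.5 + q * 0.5) * gas_limit when the
    second moving average q (over auctioned-but-unverified blocks) is used."""
    if q is None:
        return p * gas_limit
    return (p * 0.5 + q * 0.5) * gas_limit

def update_moving_avg(avg: float, sample: float, n: int = 100) -> float:
    """One possible moving-average update over roughly the last n samples
    (hypothetical; the proposal does not fix a specific averaging rule)."""
    return avg + (sample - avg) / n

assert proposer_fee(2.0, 1_000_000) == 2_000_000.0
assert proposer_fee(2.0, 1_000_000, q=4.0) == 3_000_000.0
```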

Challenges

Insufficient Competition

The success of the proposed auction framework heavily relies on the participation of many independent entities. If provers collude or form alliances to increase their profits, they could strategically place a single bid at the initial/highest price s, leading to a constant rise in rewards. Such behavior contradicts the system's intended cost-efficient competition.

Nevertheless, this behavior may unintentionally incite competition. As the proving reward considerably increases, it is likely to catch the attention of other provers, thus encouraging their participation. This market self-regulation could restore balance and maintain the competitive integrity of the auction process.

Additional Verification Delay

Introducing an auction window inevitably adds an extra delay to the proving time. This delay might not be noticeable when the average proving time is relatively long (over 30 minutes). Still, it could become significant in future scenarios where proof generation takes just a few minutes.

However, as emphasized at the outset of the proposal, our goal is to optimize for cost, not speed. While this additional delay is an important consideration, it's unlikely to pose a major hurdle to our primary objective of cost-effectiveness.

Brechtpd commented 1 year ago

Some previous discussion on this: https://github.com/taikoxyz/internal/issues/121

Also let's take into account batch verification of proofs onchain when possible! Otherwise we'll have to modify things again when we can do this.

  • Less math; fewer things can "go wrong" or be obtuse to understand.
  • Less manual tweaking of parameters (proofTimeTarget, proofTimeIssued, etc.) that cause the fee to either go exponentially to infinity or approach 0.

It seems underappreciated that the current proving system works exactly the same as the L2 proposing fee, which in turn is based on the data-blob EIP-1559 system (which in its turn is based on the current EIP-1559)! Same mechanism, same inputs, exactly the same math, exactly the same code (well, in theory; unfortunately our current code base contains two pretty distinct paths for doing the same thing). Even on the node side, exactly the same logic needs to be followed when proposing blocks as is currently required for submitting proofs! This is why I liked it in the first place: same well-known system, all code we already have basically reused as-is. But in practice it seems like it's not really seen that way. :(

Slightly more fair distribution depending on implementation; largely I think a small core group would still prove most of the blocks though, as you could bid through multiple addresses.

Yeah, that's also my fear about an auction system. Will it actually have a different outcome than the current prover system that allows us to reuse code, in exchange for a completely new system that may end up being the same (a single prover winning everything)?

Why would multiple addresses help though?

after a block is proposed, you bid, and then an auction ends, and whoever says they will accept the least amount of wei per gasUsed as the reward, wins.

Our current tokenomics framework rewards the fastest prover, encouraging prioritization of speed over cost-effectiveness. This conflicts with Taiko's commitment to delivering cost-efficient proofs. By incentivizing cost-effective proofs, we can significantly reduce Taiko's overall Layer 2 (L2) expenses, enhancing our competitive edge in the marketplace.

Hmm it doesn't! It very much aims to find the most efficient prover for the target time.

To mitigate the potential for accumulated ZKP delays, I suggest conducting auctions for upcoming blocks even before they are proposed. This approach does introduce uncertainty for provers due to the unknown block gas used and data size at the auction's outset. To address this, I propose a pricing model for the auction based on the gas/data usage of the auctioned block, with the block reward calculated as x*y if the winning bid is x TKO token per gas and the actual gas used by the block is y. Here, x will be termed as the bid per gas, or simply bid.

Unfortunately gas is not a great proxy for proving cost. I think a couple of smallish risks here:

I suggest setting the initial bidding price for new auctions at s=2*p, while maintaining a moving average bid for all verified blocks, represented as p. Each successive bid should be at least 5% lower than the current bid.

I think a curve going up in time has some nice properties (see above).

The auction window starts with the first bid and ends after either 5 minutes or 25 Ethereum blocks( or if the bid falls to 50% smaller than p?). Blocks become eligible for proving only once the auction has officially ended. The auction winner must submit the first ZKP for the block within 60 minutes of the block proposal or the auction's end, whichever is later.

So you propose to still have a built-in target time into the protocol, similar to the current proving system?

Multiple concurrent auctions for different batches are allowed, but I suggest limiting the upcoming 100 batches for improved manageability.

The auctions can be for block ids % n, with n also being the number of auctions that need to be run for a sequential range of blocks. That way we make the system more parallelizable, because now a prover doesn't have to be able to prove all sequential blocks himself; the work can be shared between up to n provers.
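The `block id % n` partitioning could be sketched as follows (hypothetical helper names): auction k covers every block whose id satisfies `id % n == k`, so up to n provers can work in parallel on a sequential range.

```python
def auction_index(block_id: int, n: int) -> int:
    """Which of the n concurrent auctions a block id belongs to."""
    return block_id % n

def blocks_for_auction(k: int, n: int, start: int, end: int) -> list:
    """Block ids in [start, end) assigned to auction k."""
    return [b for b in range(start, end) if b % n == k]

n = 4
assert auction_index(10, n) == 2
# Sequential blocks 0..7 are split across 4 independent auctions:
assert blocks_for_auction(0, n, 0, 8) == [0, 4]
assert blocks_for_auction(1, n, 0, 8) == [1, 5]
```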

I propose to charge a fee in Taiko tokens from the block proposer calculated as p * gas_limit, where p is the moving average bid for all verified blocks.

Using the gas cost to estimate the prover cost isn't great because, regardless of the gas used in the L2 block, the prover has to pay the same gas fees to get the proof on-chain. These on-chain fees will (hopefully) be the biggest part of the proof generation cost (batching proof verification on-chain will obviously help here!).

Introducing an auction window inevitably adds an extra delay to the proving time. This delay might not be noticeable when the average proving time is relatively long (over 30 minutes). Still, it could become significant in future scenarios where proof generation takes just a few minutes.

Auctions make the most sense when we're pessimistic about prover times in the future. The redacted fee proving system would work well in the optimistic and pessimistic case which is part of why I like it.

cyberhorsey commented 1 year ago

Brecht:

Also let's take into account batch verification of proofs onchain when possible! Otherwise we'll have to modify things again when we can do this.

I'm out of the loop on this, is there another issue/TLDR? One proof can be proofs for multiple blocks?

Why would multiple addresses help though?

In the event we put some code in that says the same prover cannot win N blocks/batches in a row.

The auctions can be for block ids %n, and so also n the number of auctions that need to be run for a sequential range of blocks, that way we make the system more parallelizable because now the prover doesn't have to be able to prove all sequential blocks himself, the work can be shared between up to n provers.

This is great, imo.

So you propose to still have a built-in target time into the protocol, similar to the current proving system?

I would think so, because we need to slash the prover's deposit if they take a week to submit a proof for a block. I was thinking some small constant grace window + state.avgProofTime, though, rather than a specific built-in target time. This window ending also allows someone else to submit the proof and receive the other % of the deposit that wasn't slashed.
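A rough sketch of this grace-window idea (names are illustrative; `state.avgProofTime` stands for the protocol's moving-average proof time):

```python
# Proving window = constant grace period + moving-average proof time,
# rather than a fixed built-in target (illustrative constants).
GRACE_WINDOW = 10 * 60  # e.g. a 10-minute grace period

def proving_window(avg_proof_time: int) -> int:
    return GRACE_WINDOW + avg_proof_time

def can_slash(now: int, auction_end_at: int, avg_proof_time: int) -> bool:
    """Once the window elapses, anyone else may submit the proof and the
    winner's deposit is partially slashed."""
    return now > auction_end_at + proving_window(avg_proof_time)

# With avg_proof_time = 30 minutes, the window is 40 minutes total:
assert not can_slash(now=1_000, auction_end_at=0, avg_proof_time=1_800)
assert can_slash(now=2_500, auction_end_at=0, avg_proof_time=1_800)
```

Note Brecht's caution below: if `avg_proof_time` is driven down by the fastest prover, this window can become unachievable for everyone else, which is what the constant grace term tries to mitigate.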

I propose to charge a fee in Taiko tokens from the block proposer calculated as p * gas_limit, where p is the moving average bid for all verified blocks.

Daniel:

I propose to charge a fee in Taiko tokens from the block proposer calculated as p * gas_limit, where p is the moving average bid for all verified blocks. We could introduce another moving average q to cover the bid of all unverified blocks that have been auctioned. For instance, the fee could be calculated as (p * 0.5 + q * 0.5) * gas_limit.

What is the advantage to use the bid instead of the proof reward itself?

Brechtpd commented 1 year ago

I'm out of the loop on this, is there another issue/TLDR? One proof can be proofs for multiple blocks?

There currently is no loop to be in on. Yeah a proof of proofs so the proof verification cost on-chain can be lowered. I haven't really thought much about it to be honest, it also depends a bit on how blocks are assigned to provers, but only really two options that I currently see:

  1. Proofs are still submitted per block, but those proofs are not immediately verified. The proof data is just made available on-chain, and that prover will get the reward for that block. Afterwards another prover (or of course maybe the same one) comes in and collects those proofs and verifies them all in a new proof that is verified on-chain. Only at this point are the blocks actually proven. This extra prover also needs to be paid somehow.
  2. The block prover and the batch block prover are the same, we basically just allow multiple blocks to be proven at the same time in the smart contract, which internally supports batching to lower the verification cost. Easiest solution because no changes needed to the fees.

2 seems to be a good fit for an auction system. 1 seems to fit well when blocks aren't assigned to specific provers.

I would think so, because we need to slash the prover's deposit if they take a week to submit a proof for a block. I was thinking some small constant grace window + state.avgProofTime though, rather than a specific built-in target time. This window ending also allows someone else to submit the proof, and receive the other % of the deposit that wasnt slashed.

We have to be very careful with average proof times. If I understand correctly, that means the fastest prover can bring down the average time so much that only he would be able to submit proofs in a way that would not get slashed. After that happens, the most cost-efficient provers cannot bid on the auction anymore because the given window would be impossible for them to achieve.

cyberhorsey commented 1 year ago

Agreed, this is the idea behind the configurable grace window, to make sure it's always somewhat realistic.

Thanks for the info on proof of proofs!

adaki2004 commented 1 year ago

I like the idea of the auction because in ideal conditions it's less math, less complex. Also from a saving (gas and resource) perspective, it's good if we have no or less waste.

I always highlighted the fears and worst-case scenarios (or not worst-case, but ugly-looking ones) of the current system (with fees going to 0 or ever increasing), so I was trying to think about those concerns as well.

Let's say we have malicious actors in both of these models (current vs. auction), with kind of unlimited resources to attack the protocol. (Let's say for some it's worth a couple million to destroy something...)

In current tokenomics:

  • You can propose rubbish blocks (by paying TKO (if non-zero :)) + ETH gas), which spams the network and slows the canonical chain down. Proof generation times for valid and invalid blocks are the same (according to John), so this is a valid concern here.

Even in this case there is a race condition over who submits a proof and when, which is a healthy one, though it might waste resources. I don't see any other (big) problem with this off the top of my head, because it is kind of net positive (quick and efficient proofs -> lower fees). More participants could take part in this spamming if TKO is 0, but again, they have to pay ETH, so that will be a bigger bottleneck for them than whether TKO is 0 or 1.

In auction:

  • We did not eliminate the "spamming" factor (for some malicious actors with the same intention and capital).
  • What if someone submits unbeatable bids (1 gwei per gas) and never submits the proof in time? It means we need a window to elapse, then "free market". This, paired with spamming, could slow down the network even more. Let's say the required deposit is a high flat rate, high enough to disincentivize this kind of activity; that can be a solution to this problem, but it does not eliminate the first.

One argument I did not share before, but now I do with Brecht, is: we should not care how the tokenomics look, as long as it makes common/rational sense. I just feel it's not a good argument because if the current system tanks the TKO to 0 (which could seem ugly), it means the network is efficient and cheaper to use, and we should not care if anyone is irrational and losing money by submitting proofs in a way that isn't worth it for them.

I still like the auction because on one hand it helps with wasted resources (for some provers), but can it introduce other waste (e.g. smart contract logic complexity costs) that will affect every end user, not only some provers?

Obviously there is more to it, just my thoughts. Let me know if any of this makes sense or if I'm completely blind on this.

Brechtpd commented 1 year ago

Agreed, this is the idea behind the configurable grace window, to make sure it's always somewhat realistic.

Isn't that basically just a fixed proof target with extra steps haha :)

I like the idea of the auction because in ideal conditions less math, less complex. Also saving (gas and resource) perspective, good if we have no or less waste.

In terms of complexity it's hard to beat basically 10 lines of code using code/formula we already have to use anyway! I can't imagine a more simple solution than the current system (well, a constant prover fee I guess :). It also has only one critical input parameter, the proof target time, which is easy to understand. The only other important one is the quotient that is just a damping factor. All the other ones are not important.

You can propose rubbish blocks (by paying TKO (if non-zero :)) + ETH gas), which spams the network and slows the canonical chain down. Proof generation times for valid and invalid blocks are the same (according to John), so this is a valid concern here.

It doesn't really spam the network that much, because all these blocks will simply be skipped, and only the actual gas used is taken into account for the L2 base fee, so it won't rise quickly. So I don't think anyone can really exploit the invalid stuff, even when trying to attack the network. Proof costs for some of these blocks containing invalid data can go from very cheap to the same as normal blocks.


In auction:

  • We did not eliminate the "spamming" factor (for some malicious actors with the same intention and capital).
  • What if someone submits unbeatable bids (1 gwei per gas) and never submits the proof in time? It means we need a window to elapse, then "free market". This, paired with spamming, could slow down the network even more. Let's say the required deposit is a high flat rate, high enough to disincentivize this kind of activity; that can be a solution to this problem, but it does not eliminate the first.

People with money to waste can still mess with an auction system, for sure. And if we make the slashable deposit very big, we make the entry cost to be a prover very high, which greatly reduces the permissionlessness/decentralization of the provers.

One argument I did not share before, but now I do with Brecht, is: we should not care how the tokenomics look, as long as it makes common/rational sense. I just feel it's not a good argument because if the current system tanks the TKO to 0 (which could seem ugly), it means the network is efficient and cheaper to use, and we should not care if anyone is irrational and losing money by submitting proofs in a way that isn't worth it for them.


  • I see no reason why on a testnet the prover fee would not go to 0 with an auction system. Every system depends on people to follow the rules in a rational way. If people want to submit the proof no matter what (which will be the case when people speculate) the prover fee will go to 0.
  • Imagine having a prover fee system that makes the proof generation cost free for users and thinking it's bad!




I still like the auction because on one hand it helps with wasted resources (for some provers), but can it introduce other waste (e.g. smart contract logic complexity costs) that will affect every end user, not only some provers?

Yeah some pros, some cons. However, especially on a testnet where there's room for speculation (and maybe even on mainnet) I don't expect this to result in a different outcome with an auction compared to any other proving fee system that is dynamic.

dantaik commented 1 year ago

Following several attempts at designing and implementing Taiko's tokenomics, I've come to a few important realizations:

With these points in mind, I propose several metrics to evaluate tokenomics, arranged according to their perceived importance. I invite feedback and am open to further discussion.

  1. Built Around Taiko Token: The tokenomics should be constructed around our Taiko token, rather than other tokens such as USD stable coins. Ideally, proposer fees and prover rewards should reach an equilibrium, such that the supply of Taiko tokens remains consistent or decreases slightly over time. This structure ensures the long-term stability and utility of our native token.

  2. No Wasted Proving Resources: Provers shouldn't waste computational resources on non-profitable endeavors. If resources are wasted, provers will understandably expect higher returns from those proofs that are accepted, in order to compensate for their losses. This will inevitably increase the average cost of proofs.

  3. Preference for Cheaper Proofs over Faster Proofs: Ideally, proofs should be both cheaper and faster. However, if two proofs incur the same time delay, the cheaper one should win most, if not all, of the time. If two proofs' time delays fall within a certain upper limit, say, one hour, the cheaper one should also be favored. By prioritizing cheaper proofs, tokenomics will encourage provers to optimize for cost rather than speed, pushing our L2 transaction costs down.

  4. No Built-In PoS Reward: To avoid being classified as a security, our tokenomics shouldn't reward tokens to stakers. While a prover might build a staking-based reward system to allow token holders to delegate their power, this should not be seen as part of our tokenomics.

  5. Minimal L1 Cost: Our tokenomics should aim to keep the average extra cost per block on its base layer as small as possible. Provers will likely charge higher fees on L2 to cover this cost, potentially excluding lower fee transactions.

  6. Simple Decision Making: Decisions on whether or not to prove Taiko's blocks shouldn't depend on complex algorithms. The fewer the inputs for such decisions, the better.

  7. No Waiting When Proofs are Ready: Provers shouldn't be required to hold their proofs and wait offline for the perfect moment to submit. Such an obligation would necessitate them developing a small system to hold proofs and make intelligent decisions about submission times. This extra effort, compared to our competitors, could deter our adoption among provers. Therefore, our tokenomics should permit proofs to be submitted immediately after they're ready, without negatively impacting the block reward. This also means that block verification times can be shortened, as provers aren't obliged to wait.

  8. Straightforward: Our tokenomics must be easily understood, particularly by decision-makers and engineers within prover companies. These stakeholders should be able to distill the core concepts into a few succinct sentences that allow listeners to quickly grasp the main points. The simplicity of communication will facilitate comprehension and acceptance of our tokenomics.

I'm hopeful that these comparison metrics will guide our discussions, preventing us from veering into too much subjectivity.

dantaik commented 1 year ago

In an effort to facilitate our ongoing discussions on tokenomics, I've developed a comparison table based on the metrics we've discussed. You may observe that these metrics seemingly tilt towards the "Batch Auction" method. I wish to candidly recognize that this bias could be inadvertent, given that I conceived the "Batch Auction" approach prior to establishing these metrics. Hence, I am earnestly soliciting your valuable feedback and suggestions to refine these metrics, thereby ensuring their fairness and objectivity.

| Metrics | Alpha-3 | Batch Auction | PoS | SomethingElse |
| --- | --- | --- | --- | --- |
| Built Around Taiko Token | | | | |
| No Wasted Proving Resources | | | | |
| Preference for Cheaper over Faster Proofs | | | | |
| No Built-In PoS Reward | | | | |
| Minimal L1 Cost | | | | |
| Simple Decision Making | | | | |
| No Waiting When Proofs are Ready | | | | |
| Straightforward | | | | |

Please note that the symbol "✓" signifies full compliance with a metric, "✕" denotes non-compliance, and "◎" indicates partial compliance. I encourage @Brechtpd to complete the remaining cells under the "PoS" column based on your understanding and evaluation of these designs.

For future discussions, I propose that we focus on each cell of the table, if possible. This approach will likely keep our discussion more focused and directly pertinent to each metric.

Brechtpd commented 1 year ago

Note that there is a very easy "upgrade" path from the Alpha-3 to solve the downsides (real or speculative). The only thing that has to be changed to turn it into PoS:

There are of course still some details left out here that are important, and we need to think more about how things would work out within the set-out constraints. But I just want to show that most of these goals are not really specific to any prover fee system; the implementation details very much matter! The Alpha-3 implementation is in many ways also an auction; it's just that the auction part is done implicitly based on when proofs are submitted, which removes a lot of the complexity of actually implementing a full on-chain auction system.

With these points in mind, I propose several metrics to evaluate tokenomics, arranged according to their perceived importance. I invite feedback and am open to further discussion.

Some other important ones I can currently think of:

Brechtpd commented 1 year ago

A rough sketch of how PoS could work. Only small changes necessary to the current proving system, and some parts get simpler:

Protocol:

POS contract:

I think that's it.

We may have underestimated the complexity and time investment required for this task. Having started a year ago, it seems we've essentially returned to square one.

I think we're pretty close to being done! I think the fees work in a reasonable way, the only thing left is to iterate and tweak the more we learn (also from partners).

Clearly, we need a set of metrics to effectively evaluate and compare different designs. A metric-based comparison approach could bring objectivity to our discussions and keep our focus on the essential aspects of various ideas, rather than on endless minutiae.

I would counter that a lot of the things that are really important are exactly these minutiae! A good example is the current proving system: the things about it that are now seen as undesirable are relatively small things that can be iterated on; no full overhaul is necessary. Classifying these systems at a general level is still useful, but doing so without taking some of the implementation details into account would mean that a lot of possible options and nuance are lost.

No Waiting When Proofs are Ready: Provers shouldn't be required to hold their proofs and wait offline for the perfect moment to submit. Such an obligation would necessitate them developing a small system to hold proofs and make intelligent decisions about submission times. This extra effort, compared to our competitors, could deter our adoption among provers. Therefore, our tokenomics should permit proofs to be submitted immediately after they're ready, without negatively impacting the block reward. This also means that block verification times can be shortened, as provers aren't obliged to wait.

Realistically they will always withhold proofs to optimize for gas costs as well. Provers will (and should) take whatever time is allocated to them to submit their proof only when L1 gas fees are as low as possible. So I think waiting is inevitable, and a generous window for provers to submit their proof is a good thing to minimize costs everywhere.

dantaik commented 1 year ago

I changed a ✕ to a ✓ for Alpha-3 for "Preference for Cheaper over Faster Proofs" because that fee system also selects the most efficient proof for the target proof time (exactly the same as the auction implementations given above, though there are ways to do auction without it of course).

It's really hard for me to follow here. Let's say Alice can generate a proof in 30m and her cost per proof is $1, while Bob's time delay is 20m and his cost is $1.1. Now Bob can always submit his proofs between 20-30m to beat Alice, right? So Alice knows she will always lose, so she will not join the competition. That means potentially cheaper proofs do not prevail.

For "No Wasted Proving Resources" for Alpha-3, I think in practice it could also be a ✓ because of off-chain communication between professional provers, who will decide among themselves who will prove which block (non-professional provers won't be able to compete anyway for now, but in the future, when proofs may be generated on normal desktops, there may still be some unexpected competition). I think it's a realistic scenario, but it's only speculation, so a ✕ is fair.

So Alice now has to find a BD person to build a relationship with existing provers? She is not going to prove blocks without talking to other people? Doesn't talking to other provers mean worse decentralization?

dantaik commented 1 year ago

Isn't PoS the same as "the richest one always wins"?

Assuming Alice stakes a lot of tokens and can now prove 10% of all blocks, but her proofs are on average 100% more expensive than others', she is not going to take a loss on most of her blocks. So rationally she will include only transactions that make her a profit, excluding transactions that would otherwise be included if she could lower her proof cost by 50%. I think this effectively increases the network's overall transaction fees, as those 10% of blocks proven by Alice now carry higher L2 transaction fees than the rest.

dantaik commented 1 year ago

Maybe we should prepare two branches with different tokenomics. When we launch our mainnet, we can have one token but two L2s, each with different tokenomics. If we feel the current one is worth keeping, we keep it. I'll start an implementation based on the auction idea.

It takes less time to write the code than to debate here.

hugo-blue commented 1 year ago

We have helped some provers run in previous Taiko testnets. It is good to see this discussion and I have learned a lot. We understand Taiko's desire to make cost a higher priority to make Taiko more competitive.

Based on the current A3 economic design of the contract code and the auction discussion, here are some comments from the provers' perspective, which we hope will be helpful. For A3:

  1. Basically, it is a mechanism that we did not know before.
  2. There is a dilemma for provers: submit in time or wait? There is still game theory to be applied, but we don't want to focus too much on game theory. That is not the core competitiveness of provers, nor is it beneficial for the zkp ecosystem, as the whole industry requires faster and cheaper zkp proving. Speed and cost, on the other hand, are competitive factors we can work on. So a clear strategy for provers would be better than a dilemma over when to submit, which may discourage provers.

For Auction:

  1. Based on the principle of cost-effectiveness, it is very likely that the most cost-effective provers will dominate the rewards. Therefore, it would be desirable to add some randomness to encourage more provers to participate.
  2. To encourage hardware vendors to optimize zkp proving performance, which is good for the whole zkp ecosystem, the target proof time should be gradually reduced. That is, vendors should still be encouraged to optimize, even if it is not the most important thing.
  3. We wonder whether a minimum reward threshold, gradually reduced over time, could be set to avoid excessive competition and malicious bidding. If competition becomes excessive, provers may consider the market totally unprofitable and be discouraged.
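
Point 3 could be sketched as a bid-validity check: a reward floor that decays per round, combined with a minimum decrement under the previous bid (the latter would also close the "1 wei smaller bid" postponement issue mentioned at the top of this thread). A sketch in Go; all names and constants are illustrative, not protocol code.

```go
package main

import "fmt"

// Illustrative bid rules: a decaying reward floor plus a minimum
// percentage decrement under the previous bid. Hypothetical, not Taiko code.
const minDecrementBps = 500 // a new bid must be >= 5% below the previous one

// floorAt returns the reward floor (wei per gas) for a given round,
// decaying by decayPerRound until it reaches finalFloor.
func floorAt(startFloor, finalFloor, decayPerRound, round uint64) uint64 {
	decayed := decayPerRound * round
	if startFloor-finalFloor < decayed {
		return finalFloor
	}
	return startFloor - decayed
}

// isValidBid enforces both the floor and the minimum decrement.
func isValidBid(bid, prevBid, floor uint64) bool {
	if bid < floor {
		return false // bids below the floor (excessive competition) are rejected
	}
	if prevBid > 0 && bid > prevBid*(10000-minDecrementBps)/10000 {
		return false // must undercut the previous bid by at least 5%
	}
	return true
}

func main() {
	floor := floorAt(1000, 200, 10, 50)      // 1000 - 10*50 = 500
	fmt.Println(floor)                        // 500
	fmt.Println(isValidBid(400, 0, floor))    // false: below floor
	fmt.Println(isValidBid(900, 0, floor))    // true: first bid above floor
	fmt.Println(isValidBid(899, 900, floor))  // false: only 1 wei below previous
	fmt.Println(isValidBid(850, 900, floor))  // true: >5% below previous
}
```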

It is really a challenge to develop a perfect economic mechanism. I admire all the work that the Taiko team has done. We are looking forward to the final design.

Brechtpd commented 1 year ago

It's really hard for me to follow here. Let's say Alice can generate a proof in 30m at a cost of $1 per proof, while Bob's proving time is 20m at a cost of $1.1. Now Bob can always submit his proofs between 20-30m to beat Alice, right? So Alice knows she will always lose, so she will not join the competition. That means potentially cheaper proofs do not prevail.

I think this has been explained quite a lot already because it's the core mechanism of the current design. Bob can only win by submitting his proofs earlier, but doing so decreases the basefee and lowers the prover fee and future rewards. So if Bob's proving cost is higher, he will lose money by doing so. That is not sustainable for Bob, but it is actually a good thing for the network because it reduces the prover fee! This is why the prover fee was going to 0 on the testnet: when provers just submit their proofs as fast as possible, the prover fee keeps going down.
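
That feedback loop can be sketched numerically. The toy model below (illustrative constants, not actual protocol values) adjusts the prover fee EIP-1559-style on proof delay rather than gas used: proofs landing faster than the target proof time push the fee down, slower proofs push it up. If Bob keeps submitting at 20 minutes against a 30-minute target, the fee he earns keeps shrinking.

```go
package main

import "fmt"

// nextProverFee is a toy version of the fee feedback described above:
// the fee moves proportionally to how far the proof delay deviates from
// the target, damped by an adjustment quotient (constants are made up).
func nextProverFee(fee, proofDelayMin, targetDelayMin float64) float64 {
	const adjustmentQuotient = 8.0
	return fee * (1 + (proofDelayMin-targetDelayMin)/targetDelayMin/adjustmentQuotient)
}

func main() {
	fee := 100.0
	// Bob always submits at 20 minutes against a 30-minute target:
	for i := 0; i < 10; i++ {
		fee = nextProverFee(fee, 20, 30)
	}
	// The fee has drifted well below 100: racing to submit is self-defeating
	// for a prover whose costs are higher.
	fmt.Printf("%.1f\n", fee)
}
```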

So Alice now has to find a BD person to build a relationship with existing provers? She is not going to prove blocks without talking to other people?

They only have to do so if they are worried about wasted prover work. Otherwise the most efficient provers should prevail automatically; it will just take a while to settle down, and some will lose money in the process. Everything is pointing in the direction of there really only being a couple of provers doing the work anyway.

Doesn't talking to other provers means worse decentralization?

I don't really care about actual prover decentralization; the only thing that matters is getting that 1 proof/block in a reasonable time for a low price. Liveness and low fees, in a permissionless system, are I think the actual goal.

I think you may be surprised that this is not unlike how 95% of blocks are built on Ethereum!

Isn't PoS the same as "the rich ones always win"?

It's important that it's delegated PoS, not a simple "richest prover decides all". The Taiko token will already be used to control much more important things than selecting a short period in which a prover has the sole right to prove a block.

Assuming Alice stakes a lot of tokens and can now prove 10% of all blocks, but her proofs are on average 100% more expensive than others', she is not going to take a loss on most of her blocks.

How would that work in practice if blocks are assigned randomly (but sure, deterministically in advance)? Randomly assigned provers should have very similar costs, especially taking into account that gas costs will be a significant part. I think in practice the variance in actual proof generation cost will be low, and we also control how much variance there is based on the circuits we offer.
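
One way to make "random but deterministic in advance" concrete is to hash the block number and draw a prover with probability proportional to stake, so everyone computes the same winner ahead of time. A hypothetical sketch; none of these names or choices reflect the actual staking design (a real system would use an on-chain randomness source, not a bare block-number hash).

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

type staker struct {
	name  string
	stake uint64
}

// assignedProver draws a prover with probability proportional to stake,
// seeded deterministically by the block number. Hypothetical sketch.
func assignedProver(stakers []staker, blockNumber uint64) string {
	var total uint64
	for _, s := range stakers {
		total += s.stake
	}
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], blockNumber)
	h := sha256.Sum256(buf[:])
	r := binary.BigEndian.Uint64(h[:8]) % total
	for _, s := range stakers {
		if r < s.stake {
			return s.name
		}
		r -= s.stake
	}
	return "" // unreachable when total > 0
}

func main() {
	stakers := []staker{{"alice", 10}, {"bob", 45}, {"carol", 45}}
	alice := 0
	for b := uint64(0); b < 1000; b++ {
		if assignedProver(stakers, b) == "alice" {
			alice++
		}
	}
	// Alice holds 10% of stake, so she wins roughly 10% of the 1000 blocks.
	fmt.Println(alice)
}
```

With assignment like this, whether Alice's 10% of blocks can carry different fees depends entirely on whether the prover fee is set per block or protocol-wide, which is the point addressed below.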

So rationally she will include only transactions that make her a profit, excluding transactions that would otherwise be included if she could lower her proof cost by 50%.

Provers don't decide which transactions get included in blocks?

I think this effectively increases the network's overall transaction fees, as those 10% of blocks proven by Alice now carry higher L2 transaction fees than the rest.

All blocks will have the same prover fee, the same as in the current design. If Alice only has 10% of the stake, she cannot increase the prover fee; a >50% consensus of staked TKO is necessary for that.

Maybe we should prepare two branches with different tokenomics. When we launch our mainnet, we can have one token but two L2s, each with a different tokenomics. If we feel like this current one is good to keep, keep it. I'll start an implementation based on the auction idea.

Fair enough.

It takes less time to write the code than to debate here.

Isn't that how ~9 of the past 10 months were spent on this? It feels like the opposite conclusion from what the past has taught us! From your feedback above I think more discussion is necessary; my thinking on how things work is significantly different from yours, and probably from other people's as well.