taikoxyz / taiko-mono

A based rollup. 🥁 🌸
https://taiko.xyz

On Tokenomics #14

Closed dantaik closed 1 year ago

dantaik commented 2 years ago

Roles

Here I chose to use proposers instead of sequencers because in our rollup contract there is nothing related to ordering transactions in a block. Given that Ethereum uses PBS (proposer builder separation), we should probably also use proposers and builders for clarity. The builder-vs-proposer economics is beyond the scope of our tokenomics design.

Prover Market

The need for ZKPs will inevitably create a market where only the highest bids receive their desired proofs. Such a market will serve as a foundation for almost all ZKP projects whose proof demand can be fulfilled by provers' services/hardware.

We may choose to or have to build our own prover market, and other projects may do the same, but eventually there will be project-independent markets that try to maximize the prover's financial return and minimize the proposer's cost. (It is very similar to a DSP, or demand-side platform, in the web2 ads ecosystem.) Such a market will most likely operate off-chain but offer on-chain verification of off-chain deals.

INFO: A demand-side platform (DSP) is a type of software that allows an advertiser to buy advertising with the help of automation. Because they allow mobile advertisers to buy high-quality traffic at scale with minimal friction, DSPs are a powerful marketing automation tool.

It's safer to assume that there will be multiple such markets, and we want to allow our provers to choose their favorite markets and use the deals they have reached off-chain with third-party provers in our protocol. We need to support custom deal adapters, as illustrated below:

function proposeBlock(
    BlockContext memory context,
    bytes calldata txList,
    bytes calldata zkDeal, // new field
    address zkDealAgent // new field
)
    external
    nonReentrant
{
    // Check that the deal agent is whitelisted by our protocol.
    checkDealAgent(zkDealAgent);

    // Check that the deal is still valid and do some bookkeeping,
    // for example, lock the penalties for XX hours.
    IDealAgent(zkDealAgent).verifyDeal(zkDeal);

    // Deposit proving fees.
    taiToken.transferFrom(…);
    …
}

Once a deal is verified, a deal-prover is attached to this pending block together with a deadline. If the deal-prover proves the block before the deadline, the deposited proving fees are transferred to the prover once the block is finalized, and the deal is consumed entirely or partially (a deal may serve multiple blocks).

If the deal-prover fails to prove the block before the deadline, the block is open to all provers. If another prover submits a ZKP successfully, a large penalty (also specified by the deal) is transferred out of the deal-prover's stake. These tokens will be used for 1) paying the actual prover an amount that's higher than the fair market price, and 2) funding our DAO for other purposes.

A deal is something like this:

struct Deal {
 address prover;
 bytes32 contextHash;
 address feeToken;
 uint256 fee; // in TAI token
 uint256 deadline;
 uint256 penalties; // in TAI
 bytes extra;
 bytes signature;
}
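
Roughly, the settlement described above could work like this (a Python sketch with hypothetical names; the 50/50 split of the penalty between the actual prover and the DAO is just an example):

DAO_SHARE = 0.5  # assumed split of the penalty between the actual prover and the DAO

def settle_block(deal, actual_prover, proof_time, finalized):
    """Return the payouts once a proof is accepted and the block is finalized."""
    if not finalized:
        return {}
    if actual_prover == deal["prover"] and proof_time <= deal["deadline"]:
        # The deal-prover delivered on time: it receives the deposited proving fee.
        return {deal["prover"]: deal["fee"]}
    # The deal-prover missed the deadline and another prover proved the block:
    # the penalty comes out of the deal-prover's stake.
    penalty = deal["penalties"]
    payouts = {
        actual_prover: deal["fee"] + penalty * (1 - DAO_SHARE),  # above-market payout
        "dao": penalty * DAO_SHARE,
    }
    payouts[deal["prover"]] = payouts.get(deal["prover"], 0) - penalty  # slashed from stake
    return payouts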

TAI Tokenomics

TAI will be minted when:

Related to this idea: actor.createLayer2(theirOwnTokenAddressOnL1) would clone our canonical L2 into their own L2 such that the native fee token on their L2 is the given theirOwnTokenAddressOnL1 token.

Brechtpd commented 2 years ago

I would be hesitant about adding restrictions to proposeBlock and the block it creates. Having the proposer choose the prover for its block may also not align with what we're trying to optimize for (profit maximization vs decentralization and fastest finalization offchain/onchain). I also believe having ETH as the network token has a lot of benefits.

My current idea about this is pretty different, so it's probably easier to just write it down here so we can compare/discuss.

Proposers

The main goal here is decentralization. In general I think the fewer restrictions on block proposition the better, so onchain there should probably be no direct link between the proposer and the prover.

If we go for maximum decentralization, block proposition is the most critical part, so we would only want block proposers to need ETH. ETH is the most decentralized way to pay and is already needed to be able to do the L1 tx; requiring another token on top of ETH to pay for other things can only make decentralization worse. With provers being more of a work-for-hire in the system, paying them mainly in something that resembles money (or is money, source: memes), or at least the closest we can get, makes sense.

And so the proposer will have to lock up some additional ETH onchain when proposing the block as payment to a prover. The amount of ETH required for the block just needs to cover the prover's costs, so it needs to be approximated somehow. For now the best way to do that, I think, is to use the max gas cost (provided by the proposer himself but verified later) and use that to calculate roughly what the expected prover cost will be for the block. However, we don't really know what the prover cost per gas is (and it will keep changing as provers get more optimized), so we need an additional mechanism to deduce this somehow (see below). We could also charge a fixed cost here and repay the sequencer afterwards when we know the ~actual cost, but that makes it harder for normal users to submit their own transaction to the queue if the cost is high, and so censorship resistance suffers. Once the proof is submitted we can refund the proposer against the actual gas used in the block (but we only know the actual gas amount when the block is proven and finalized).
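
A rough sketch of the deposit/refund math (names and the cost-per-gas estimate are purely illustrative):

def required_deposit(max_gas, est_prover_wei_per_gas):
    # Locked onchain at proposeBlock time as payment to a future prover,
    # based on the proposer's self-declared max gas.
    return max_gas * est_prover_wei_per_gas

def refund_after_proof(max_gas, actual_gas, est_prover_wei_per_gas):
    # Returned to the proposer once the block is proven and the actual gas is known.
    return (max_gas - actual_gas) * est_prover_wei_per_gas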

Proposers can get a block reward, but it'll have to scale with the amount of work (gas) actually done in the block. The only purpose is to subsidize proposal costs.

Rate limiting

As an L2 we don't have infinite bandwidth for transactions, and we still want people to be able to run an L2 node pretty easily. Unlike an L1, however, very high loads over pretty long times aren't really that much of a problem; we mostly care about the average load over a relatively long time.

We should implement an EIP-1559-like mechanism that imposes an extra cost on proposers depending on the current L2 load. This cost could be called something like the network utilization cost.
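
One minimal sketch of such a load-dependent cost (an EIP-1559-style base-fee update; the gas target and adjustment quotient below are placeholders, not decided values):

GAS_TARGET = 5_000_000         # assumed target L2 gas per period
ADJUSTMENT_QUOTIENT = 8        # assumed, as in EIP-1559

def next_utilization_fee_per_gas(current_fee, gas_used_last_period):
    # Move the per-gas fee up when L2 load is above target, down when below.
    delta = (gas_used_last_period - GAS_TARGET) / GAS_TARGET / ADJUSTMENT_QUOTIENT
    return max(0.0, current_fee * (1 + delta))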

And so there are three costs for proposing blocks (all paid in ETH):

  • The L1 transaction cost of proposing the block
  • The prover fee locked up for the block
  • The network utilization cost

In return the block proposer receives:

  • The transaction fees (ETH)
  • Block reward (TAI)

Builders/Provers

At this point there are blocks in the queue onchain, each having some ETH locked up to pay a prover who is willing to prove the block.

The main goal here is getting the block finalized as fast as possible for the lowest cost. So there is a balance to be struck between those two goals, and it would be great if we could easily tweak it and let the market do its thing. Similar to L1s, we can use the block rewards for this.

After the block is proposed we start increasing the block reward for this block. So the older the block, the higher the block reward for it will be (and so the oldest block also automatically gets proven first). The two important parameters here are

This mechanism also ensures that we only have to roughly estimate the proving cost for the proposer. If we underestimate the cost, then the difference will be made up by the block rewards (and so the prover only has to wait a bit longer before starting/submitting the proof).
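
A rough sketch of such a time-based reward (growth rate and cap are placeholders, not decided values):

REWARD_GROWTH_PER_SECOND = 0.01   # assumed TAI per second of block age
MAX_BLOCK_REWARD = 100.0          # assumed cap so the reward cannot grow forever

def block_reward_at(proposed_at, now):
    # The older the unproven block, the higher the reward for proving it.
    age = max(0, now - proposed_at)
    return min(MAX_BLOCK_REWARD, age * REWARD_GROWTH_PER_SECOND)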

Option 1. Battle Royale

In the easy case any prover at any time can submit a proof for the block. This automatically rewards the most efficient prover, but of course it can be pretty wasteful because multiple provers are doing the same work and have to throw away their work if they aren't first. However, this also has some nice benefits. Multiple provers working on a proof provides some nice redundancy, so there's no need to depend on a single prover at any time to be able to submit a proof within some time frame. Similarly to Ethereum, we could even give a block reward to these "uncle" proofs that prove an already proven block (this reward would decrease quickly in time as a function of when the first proof was submitted, and would also decrease the more of these proofs are submitted).
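
Such an uncle-proof reward could decay roughly like this (the base value and decay constants are placeholders):

import math

UNCLE_BASE_REWARD = 10.0   # assumed TAI

def uncle_proof_reward(seconds_since_first_proof, uncles_already_submitted):
    time_decay = math.exp(-seconds_since_first_proof / 60.0)   # fades within minutes (assumed)
    count_decay = 0.5 ** uncles_already_submitted              # halves per extra uncle (assumed)
    return UNCLE_BASE_REWARD * time_decay * count_decay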

Option 2. Block auction

Instead of a race to get the proof first, we auction the block off onchain. The same mechanism with increasing block rewards is used, but we now have an additional parameter in the form of the allowed time to generate the proof, which also increases over time (so over time both the block reward and the allowed prover time increase).

And so now the prover only needs to buy the right to be able to prove a block first, with no need to waste any computing power (though the prover could still optimistically start proving earlier). At the time the prover buys the block the auction stops and the prover posts some collateral. Only this prover can now generate the proof for the block and submit it onchain. If the prover is too late the auction is restarted and the prover loses his collateral (this extra collateral could be used as an extra reward for the next prover so it's done as fast as possible).
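
A compact sketch of that auction flow (the class, names, and linear growth functions are illustrative only):

class BlockAuction:
    def __init__(self, proposed_at, reward_growth, time_growth, base_prove_time):
        self.proposed_at = proposed_at
        self.reward_growth = reward_growth    # assumed: TAI per second of block age
        self.time_growth = time_growth        # assumed: extra allowed proving seconds per second of age
        self.base_prove_time = base_prove_time
        self.winner = None
        self.collateral = 0.0
        self.prove_deadline = None

    def current_terms(self, now):
        # Both the reward and the allowed proving time grow while the block sits unsold.
        age = now - self.proposed_at
        return age * self.reward_growth, self.base_prove_time + age * self.time_growth

    def buy(self, prover, collateral, now):
        # The prover buys the exclusive right to prove this block and posts collateral.
        reward, allowed_time = self.current_terms(now)
        self.winner, self.collateral = prover, collateral
        self.prove_deadline = now + allowed_time
        return reward

    def on_deadline_missed(self):
        # The auction restarts; the forfeited collateral can top up the next prover's reward.
        forfeited = self.collateral
        self.winner, self.collateral, self.prove_deadline = None, 0.0, None
        return forfeited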

Prover cost paid by the proposer

So how much ETH does the proposer need to pay to the prover per gas? We can use the block rewards we have to give for each block over some period of time as an input:

dantaik commented 2 years ago

I think there are a few things we need to talk about first, they are our design assumptions.

  1. Whether the proving time is linear in a block's actual gas cost. If this is true, then we can reward the prover based on the block's gas limit (the block proposer will use a more accurate gas limit in his block to reduce the prover cost he has to pay).

  2. Whether it is possible to generate multiple zk-proofs for the same block based on the same parent. If this is true, we can reward uncle-proofs, otherwise, we cannot.

  3. ZKP miners (provers) will monitor multiple chains and select the most profitable tasks. I believe this is true because a single chain may not offer sufficient demand to make a good profit. Bitcoin and Ethereum have predictable block times, so there are always blocks to mine; if other L2s are designed like Taiko, there will only be blocks to prove when there are enough user transactions.

  4. Whether a block's prover fee + a time-sensitive block reward is competitive enough (compared to other chains). If this is not true, then it is possible that a block will never get proved. If our token price is not doing well, and if we apply a max block reward, then prover fee + maxReward may still be too small and the chain will get stuck.

Brechtpd commented 2 years ago
  1. Probably not in the short term, but the aim should definitely be that they roughly match in the long term. Even adding something like a "prover oil cost" that tracks the actual prover cost in parallel with the gas cost would still mess up the fee market. So in the short term we would basically subsidize the incorrect pricing of certain operations by using the block reward system (but if this actually gets exploited it will eventually increase the block proving cost/gas for the proposers, so the system should protect itself from this automatically and not keep depending on those block rewards).
  2. We can do something like allowing the prover to set his own address somewhere in the public input. This input would have no side effects, but it would still be part of the circuit proof (so changing it would require regenerating the proof). This makes sure the proof is unique while at the same time preventing theft of the proof, because payment is always done to that address regardless of who submitted it; see the sketch after this list. (With zk, the generated proof will always be different for all provers no matter what, but without zk I think the proof may always be the same if the exact same thing needs proving, though I'm not completely sure of that.)
  3. I also think this will be true.
  4. Definitely something that will need some tweaking to find robust numbers that achieve what we want. I do believe this is very unlikely to be a problem even with suboptimal parameters. If we're okay with theoretically unlimited inflation to keep the chain going, the max block reward could even be set in USD and then converted to TAI using a TAI/USD price oracle. But I don't think we'll need to go that far; as long as the chain isn't completely dead (i.e. no users and no new blocks) it should keep going regardless. Anyone with an interest in the chain chugging along (our team, but also other dapps deployed on the chain, for example) will want to prove blocks when necessary for reasons other than just making money directly from the proof. Even normal users that want to exit the rollup will want to prove the blocks at a cost. There is also no risk with a block remaining unproven for a long time, so there's plenty of time for people to organize to be able to do so.
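
For 2., the binding could conceptually look like this (illustrative only; a plain hash stands in for the circuit's public-input commitment):

import hashlib

def public_input_commitment(block_context_hash: bytes, prover_address: bytes) -> bytes:
    # The prover address has no effect on the proven state transition, but because the
    # proof commits to it, swapping the address means regenerating the proof, and the
    # payment always goes to the committed address regardless of who submits it.
    return hashlib.sha256(block_context_hash + prover_address).digest()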
dantaik commented 2 years ago

Regarding #2, I created this issue to track a new requirement in our zkEVM: https://github.com/taikochain/taiko-mono/issues/22

johntaiko commented 2 years ago

In return the block proposer receives

  • The transaction fees (ETH)
  • Block reward (TAI)

If TAI is only used for the block reward, I have doubts about the token's liquidity and its price. This could lead to no one proposing.

Brechtpd commented 2 years ago

Not sure I understand your concern here, could you explain a bit more? For proposers the block reward isn't really important because they can demand a higher tx fee from users.

johntaiko commented 2 years ago

For proposers the block reward isn't really important because they can demand a higher tx fee from users.

Oh, I got it.

dantaik commented 2 years ago

I'm trying to model the block reward. Please see the image below.

[Image: WechatIMG235 — block reward model]

The image shows that on the time-axis we have 3 phases:

Brechtpd commented 2 years ago

I think this is pretty good, it largely makes sense to me. I guess the only part that is a bit hard is the part between Avg and 4*Avg. If a block takes longer to prove than expected it could mean:

  1. Provers aren't interested in proving the block for the expected average reward for some reason (e.g. their hardware is being used to generate proofs for another, more profitable use case)
  2. Provers are waiting on a higher block reward before they submit their proof
  3. Provers are still doing their best to get the proof onchain as soon as possible, but the block just takes longer to prove than other similar blocks (which can happen in the short term because of gas/prover cost mispricing)
  4. Other reasons?

For 2) and 3) at least I think the low reward phase doesn't really help, except to potentially delay when the proofs are submitted. So perhaps a parabola (or just two lines resembling a parabola) with the (non-zero) minimum at 1.25*Avg may make more sense if the above is true?
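
For example (the slope and minimum are placeholders):

def block_reward_curve(proof_delay, avg_proof_time, min_reward, slope):
    # Two lines resembling a parabola, with a non-zero minimum at 1.25 * Avg.
    vertex = 1.25 * avg_proof_time
    return min_reward + slope * abs(proof_delay - vertex)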

We also need to be able to find out how much we have to charge the proposers for the proving cost, just from looking at when proofs get submitted on this curve. That's easy enough to do when you just have a linearly increasing line with some starting point (average block reward higher than some target -> increase prover cost for the proposer), but this type of curve makes that more difficult to think about.

When provers get cheaper/faster, the block reward is higher at first, but Avg then also decreases, so after some time the block reward decreases to R again and proposers don't end up paying more. But if provers get more efficient, then either the prover cost paid by the proposer should come down or the block reward should be lowered, automatically.

If a block gets proven between 2*Avg and 4*Avg it's not so clear what we need to do. The prover cost paid by the proposer might be sufficient, or the prover may be willing to submit the proof he already calculated at a loss to recuperate some of his costs, because otherwise he may not earn anything from it. The same is of course also true in other scenarios, but there everybody always tries to submit as soon as possible, so automatically only the most efficient provers survive.

There's probably a way to find good solutions for these, but it may be a bit more complex; these dynamic things are tricky to get right, so the easier the better.

Brechtpd commented 2 years ago

I want to share my thoughts about using TAI as the gas token on L2. I thought we were going to use ETH, so I didn't think this was a point of discussion.

I think it's not a good idea, unless I'm missing something. Polygon did the same with their PoS sidechain, and they also said it wasn't a good decision; I believe they said somewhere it will be ETH for their newer chains. It's an important decision and you can only make it once (well, once per chain :).

I guess my main question is: what is the benefit of using TAI as the L2 gas token? I can only see downsides. I think the classic example holds here: you don't buy an iPhone using Apple shares.

In the short term the downside is unnecessary friction for users. Most users will still be Ethereum L1 users and will already be used to using ETH as gas. These users already own ETH, and using ETH on L2 as well makes things a seamless experience. It seems a bit crazy to me to force people to own a special token to be able to do transactions, something we will have to spend time educating users on while we and our users have better things to do. Technically we're trying to stay as close as possible to what developers and users already know, and then we'd do something like this, which seems counterintuitive to me.

In the long term almost everybody will be using some kind of account abstraction, so the gas token doesn't really matter for normal users (as it should). Users will probably pay for transactions in some stablecoin to relayers that do the transaction for them. So in the long term the gas token will be owned mostly by relayers and some other educated users, not normal users. But even in this case the relayers will just buy enough tokens to pay for the gas they spend. This is basically paying for a service, and so to me it makes sense that even relayers own and pay in something that resembles money as closely as possible: ETH.

So both in the short and long term I believe ETH makes a lot of sense. Users will also likely be moving between rollups frequently, and each rollup having its own gas token would be less than ideal and a good reason to avoid certain rollups.

We're creating an L2 so the thing we're really selling is blockchain space which we will already be doing in the form of the network utility fee (or whatever you want to call it). I don't think we really need to do anything more. I don't think we need to confuse what is an investment with what is simply paying for a service. L1s don't have a choice; they have to use their own token, but for L2s I believe there are better options.

kirataik commented 2 years ago

We're creating an L2 so the thing we're really selling is blockchain space which we will already be doing in the form of the network utility fee (or whatever you want to call it). I don't think we really need to do anything more.

Agreed. This is exactly what I thought. We can add tokenomics later, after our L2 gets really popular.

dantaik commented 2 years ago

We're creating an L2 so the thing we're really selling is blockchain space which we will already be doing in the form of the network utility fee (or whatever you want to call it)

I'm not convinced about using TAI as the L2 transaction fee token myself either, so I'm glad you brought that up. Do you think we should use TAI as the network utilization fee token?

BTW, you mentioned Polygon, do you remember where they talked about their decisions retrospectively?

Brechtpd commented 2 years ago

Do you think we should use TAI as the network utilization fee token?

I think it should also be ETH so all fees are paid in the same token. As with tx fees, I don't think there's any benefit to having block proposers pay in TAI there; it only makes things more complicated (and at worst less decentralized). Also, having the revenue for the DAO directly in ETH seems nice.

BTW, you mentioned Polygon, do you remember where they talked about their decisions retrospectively?

I found this with some explanation (in the quoted tweet); the OP also confirms it will be ETH: https://twitter.com/sandeepnailwal/status/1550009537406574593. There's more but I can't find it currently.

dantaik commented 2 years ago

Regarding proverFee, network utility fee, and block reward.

In my previous proposal above, the block proposer needs to lock in a deal with a prover so his block is guaranteed to be proven. With Brecht's feedback, I think we can consider the Taiko protocol on L1 as a middleman to facilitate such a deal. Now the deal is among the Proposer, the Prover, and the Protocol:

The Proposer would like to propose a block and wishes the block to be guaranteed to be proven. So the Proposer enters a deal with the Protocol by paying an up-front fee to the Protocol. The Protocol uses its best knowledge (stats) to provide a fee quotation to the Proposer, but the up-front fee may or may not be sufficient to attract the Prover to work on the block's ZKP:

In both cases, the Protocol knows the actual prover fee paid to the Prover, so the next time a Proposer wants to propose a new block, the Protocol can provide a much more accurate quote.

But if too many proposers suddenly come at once and ask for quotes, the Protocol will increase the quote by a small percentage, because now it is the seller/prover's market. Otherwise, if there are very few proposers proposing blocks, there is no overcharge or maybe even a small discount. This is as effective as the network utility fee.
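
A sketch of how such quoting could behave (thresholds, percentages, and names are placeholders):

class ProtocolQuoter:
    def __init__(self, initial_fee_per_gas):
        self.fee_per_gas = initial_fee_per_gas    # running estimate from observed deals

    def record_actual_fee(self, paid_fee_per_gas, weight=0.1):
        # Learn from what the Prover actually got paid for the last block.
        self.fee_per_gas = (1 - weight) * self.fee_per_gas + weight * paid_fee_per_gas

    def quote(self, gas_limit, pending_requests):
        if pending_requests > 10:      # assumed threshold: seller/prover's market
            premium = 1.05             # overcharge by a small percentage
        elif pending_requests < 2:     # very few proposers: small discount
            premium = 0.98
        else:
            premium = 1.0
        return gas_limit * self.fee_per_gas * premium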

All of the above assumes there is only one protocol token as the fee token, not Ether. Otherwise, the math will be really difficult for the Protocol, as the Protocol doesn't know the conversion rate between Ether and the protocol token.


My suggestion: all these fees should use the TAI token, not Ether. It will greatly simplify the design, and it also simplifies the prover's math -- they receive TAI as the block reward after all, so they have to care about TAI's price.


I don't think requiring block proposers to have TAI is a problem. Individual users proposing a block should be really rare, and if we want to support it, we can create another contract that helps the user buy TAI with another token on Uniswap and then propose the block with the purchased TAI.

Brechtpd commented 2 years ago

If the fee is sufficient: the ZKP will be generated and the block will be proven. The Protocol will take x% as a deal fee and burn it.

Will this percentage be dynamic? From reading your explanation I get the impression it isn't, because otherwise it's very similar to the split network utility fee / prover fee approach.

I do think you have to see them as separate (and so the percentage needs to be dynamic in your approach if you want to look at it this way) because they pay for very different things. High network activity does not automatically translate into more expensive proofs, and low network activity does not automatically mean cheaper proofs. One is paying for using the network, which will probably be very dynamic (L2 node load etc.), the other is paying a much more fixed (but still ever changing) cost for just generating a proof (in some cases the proving cost can be 50% of the total cost, in other cases it may just be 10%).

All of the above assumes there is only one protocol token as the fee token, not Ether. Otherwise, the math will be really difficult for the Protocol, as the Protocol doesn't know the conversion rate between Ether and the protocol token

Is this a problem? It's not like we can really accurately predict the prover cost, because there are still quite a few variables that will be different for each block (not just the offchain prover cost and the race to be first, but also the L1 gas cost to submit the proof and the possible gas wars for it), so the estimated cost will have to gradually shift each time towards a target cost that is always changing.

I don't think requiring block proposers to have TAI is a problem.

I also don't think it would be the biggest problem, but it's still a bit of a nuisance, because block proposers now need to make sure they have enough ETH and TAI to be able to submit a block using their hot wallet, instead of just ETH. We would also have to make absolutely sure that TAI is always available somewhere with enough liquidity (and not on our own rollup; it would almost have to be on Ethereum L1 to avoid other external dependencies) or the decentralization and security of the rollup falls apart. So it seems like this would definitely create extra complexity outside of the core protocol (especially the TAI availability on L1 is a bit tricky, I think), and it has some impact on the security of the protocol, though of course either a negligible or very small impact, but an impact nonetheless.

A simpler core protocol is of course almost always the better option if we can have it, but here the tradeoff isn't an obvious win, I believe. It kind of shifts some of the complexity externally, where it's harder to control and think about, vs internally, where we have full control and fewer unknowns.

dantaik commented 2 years ago

You previously said "As an L2 we don't have infinite bandwidth for transactions". Yes, but an L2 is supposed to have more bandwidth than L1, so if there is too much load, it should be automatically throttled by L1 since block proposal happens on L1, shouldn't it?

Imagine two deployments of the same zkRollup protocol: one incentivizes provers better, so it can handle a lot of load and the average number of pending/unproven transactions is very big while the average finality delay is very small; the other one only has a very small number of pending blocks but its proving/finality delay is long. I think we cannot claim the first one is more overloaded than the second one by comparison.

I'm having a hard time coming up with a clear definition of the network utilization fee.

Brechtpd commented 2 years ago

You previously said "As an L2 we don't have infinite bandwidth for transactions". Yes, but an L2 is supposed to have more bandwidth than L1, so if there is too much load, it should be automatically throttled by L1 since block proposal happens on L1, shouldn't it?

That's right, the key metric here would be the total amount of gas used on L2. This metric determines how hard it is to keep an L2 node fully in sync. The same concerns that require gas to be limited on L1 apply here on L2 (with the main one being state growth).

Imagine two deployments of the same zkRollup protocol: one incentivizes provers better, so it can handle a lot of load and the average number of pending/unproven transactions is very big while the average finality delay is very small; the other one only has a very small number of pending blocks but its proving/finality delay is long. I think we cannot claim the first one is more overloaded than the second one by comparison.

I don't see the provers' ability to finalize a lot of blocks as the ability to handle a higher network load. The provers' job is to help finalize the block onchain, but in the end this has almost no impact on the actual L2 chain. The only thing it enables is that smart contracts know what the L2 state is (but the L2 network already knows it).

Because provers can scale offchain pretty much without constraints, the number of blocks that need to be proven should have practically no impact on their cost (unless submitting all these proofs onchain starts saturating the L1 chain, but that's an unrelated issue). So I think we can look at the prover cost on a per-block basis (based on gas used in the block), with the only parameters being how fast we would like the block to be proven and how much that ends up costing the block proposer (and the block rewards).

I'm having a hard time coming up with a clear definition of the network utilization fee.

I think looking at an optimistic rollup may make things easier, and a zk rollup is practically the same as an optimistic rollup with validity proofs. Both an optimistic rollup and a zk rollup (that do the same thing) have exactly the same blockchain capacity. The fact that an optimistic rollup doesn't have any provers doesn't mean its L2 network capacity goes to infinity.

In our case we have to make sure that anybody can run an L2 node, because they may have to be able to prove their own transaction that lets them exit the rollup, or find somebody willing to do it for them. Basically, provers need to be able to join the system with reasonable hardware. General infrastructure like block explorers should also be able to keep up. Unlike on an L1, not everybody needs to be able to verify the transactions all the time with a reasonable computer, so L2 gas limits can generally be a lot higher. But not too high.

Brechtpd commented 2 years ago

I found thinking about the network fee and the proving fee like an AMM very useful: https://ethresear.ch/t/make-eip-1559-more-like-an-amm-curve/9082.

I applied this idea to both fees. For the network fee it's just like you'd expect and as described in the link above (though a bit differently because there are no fixed block times): there's a gas target and an AMM curve is used to correctly price this gas. For the prover fee there's a block reward target for the proof, and depending on how much block reward is used we go up or down the price curve for the prover fee (i.e. the prover basefee is dynamically changed depending on the block reward use, which is exactly the same mechanism required for changing the network fee depending on gas use). Proposers/provers "buy" things from the protocol (network capacity/prover rewards), the protocol "sells" the predefined target amounts, and the price changes depending on the buy and sell pressure.

The calculations are very simple as you can see below, though I'm sure there are issues with the exact implementation details.

import math

ETH_BLOCK_TIME = 12
GAS_TARGET = 5000000                 # target L2 gas (per ETH_BLOCK_TIME seconds)
ADJUSTMENT_QUOTIENT = 32

PROVER_REWARD_TARGET_PER_GAS = 0.1     # TAI/GAS in block rewards to prover
PROVER_TARGET_DELAY_PER_GAS = 0.001  # TODO: change to something dynamic probably

time = 0

# network fee
gas_issued = 0
last_time = time

# prover fee
basefee_proof = 0
blockreward_issued = 0

def eth_amount(value, target):
    return math.exp(value / target / ADJUSTMENT_QUOTIENT)

def network_fee(gas_in_block):
    global gas_issued
    global last_time

    gas_issued = max(0, gas_issued - GAS_TARGET * ((time - last_time)/ETH_BLOCK_TIME))
    cost = eth_amount(gas_issued + gas_in_block, GAS_TARGET) - eth_amount(gas_issued, GAS_TARGET)
    gas_issued = gas_issued + gas_in_block

    last_time = time

    return cost

def update_basefee_proof(gas_in_block, block_reward):
    global blockreward_issued
    global basefee_proof

    blockreward_issued = max(0, blockreward_issued + block_reward - PROVER_REWARD_TARGET_PER_GAS * gas_in_block)
    basefee_proof = eth_amount(blockreward_issued/gas_in_block, PROVER_REWARD_TARGET_PER_GAS) / (PROVER_REWARD_TARGET_PER_GAS * ADJUSTMENT_QUOTIENT)

    return basefee_proof

def prover_fee(gas_in_block):
    return gas_in_block * basefee_proof

def block_reward(gas_in_block, delay):
    # TODO: probably something else than this
    return PROVER_REWARD_TARGET_PER_GAS * (delay / (PROVER_TARGET_DELAY_PER_GAS * gas_in_block))

def propose_block(gas_in_block):
    print("network fee: " + str(network_fee(gas_in_block)))
    print("prover fee: " + str(prover_fee(gas_in_block)))

def prove_block(gas_in_block, delay):
    # Use a local name that does not shadow the block_reward function above.
    reward = block_reward(gas_in_block, delay)
    print("block reward: " + str(reward))
    update_basefee_proof(gas_in_block, reward)

EDIT: Updated the code a bit to conform to the correct basefee calculations for the prover fee.
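
For example, assuming the functions above are in scope (the gas and delay numbers are arbitrary):

propose_block(3_000_000)              # prints the network fee and the prover fee for this block
prove_block(3_000_000, delay=36)      # proof lands 36s later: prints the block reward, updates basefee_proof
time = 36                             # advance the clock before proposing the next block
propose_block(3_000_000)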

dantaik commented 2 years ago

If we deploy our protocol twice, as rollup A and rollup B, and somehow A always has big blocks (gas limit close to the theoretical upper limit) while B always has very small blocks (gas limit close to the lower end), do we prefer rollup A or B, or something in the middle? I ask because I'm still struggling with the idea that GAS_TARGET makes sense.

I know that with bigger blocks it takes more time for L1 nodes to sync, because an L1 node can trust no one and has to sync from genesis. But is it really a problem for a zkRollup, since a rollup's L1 contract can be used to verify whether a block is recent and finalized, so the L2 client can download a snapshot from peers and be certain about its validity?

dantaik commented 2 years ago

Can PROVER_REWARD_TARGET_PER_GAS be less than or equal to 0?

Brechtpd commented 2 years ago

If we deploy our protocol twice, as rollup A and rollup B, and somehow A always has big blocks (gas limit close to the theoretical upper limit) while B always has very small blocks (gas limit close to the lower end), do we prefer rollup A or B, or something in the middle? I ask because I'm still struggling with the idea that GAS_TARGET makes sense.

Generally A because it should be cheaper, as long as blocks are limited in size somehow. I don't have any other concerns.

I know that with bigger blocks it takes more time for L1 nodes to sync, because an L1 node can trust no one and has to sync from genesis. But is it really a problem for a zkRollup, since a rollup's L1 contract can be used to verify whether a block is recent and finalized, so the L2 client can download a snapshot from peers and be certain about its validity?

Agree on all, it just depends on what assumptions you want to introduce. A dumb example, but let's say you want to get your ETH out of the rollup. To create a valid transaction that does this you need to know your current nonce and how much ETH you have. So you need to get up-to-date L2 data from somewhere, which means depending on somebody who can keep an L2 node in sync. In this example the data is static, so it's not so difficult. If your money is stored in a smart contract, then creating a valid transaction might be time sensitive, so you may need an actual in-sync L2 node with the latest state for this. Throughput can be extremely high if you assume some supercomputer will make this data available for you and you can count on it being there. However, if that's not the case you may be out of luck and you're on your own to sync all the data.

Some limitation is required no matter what; how you do it is up for debate. But if there's a limit, there is congestion, so we have to deal with that. The benefit of using GAS_TARGET/EIP-1559, I feel, is that it handles that for us and at the same time can redistribute the excess fees. GAS_TARGET will normally be high, so L2 block space should be cheap most of the time regardless of how we choose to do this.

EDIT: BSC has been running at 5x the Ethereum gas limit I believe and had a lot of trouble getting their nodes running. So even very capable hardware will run into issues at some point.

Brechtpd commented 2 years ago

Can PROVER_REWARD_TARGET_PER_GAS be less than or equal to 0?

We need to be able to detect that the current prover base fee needs to go down. So normally some non-zero value would be required, because we have to be able to go down the curve (we need to detect that a lower block fee is also okay). However, in that case perhaps a proof delay target can be used instead, so an ETH <-> delay curve with a PROOF_DELAY_TARGET_PER_GAS target (longer delay -> higher fee, shorter delay -> lower fee). Block rewards could be given after the target delay has been exceeded. Or something like that; I don't know if that would work well but it seems like it could.

In any case I don't really see that much of a problem with a small non-zero block reward (a couple of dollars should be good enough). I don't think these block rewards need to be big; creating these proofs shouldn't be that expensive and should easily be possible by renting hardware on AWS, for example. That doesn't necessarily mean we'll get the proof in the shortest time like with special hardware, but I don't think we would want to help subsidize that because in the end it's not important enough. It's not like a PoW L1 where a lot of miners are doing a stupid amount of calculations and this is actually beneficial to the chain. For us, we just need some people creating proofs for some blocks in a reasonable time for a reasonable amount of money (cost + small profit, so maybe on the order of $10-$50 per block proof depending on the L1 gas cost?).

dantaik commented 1 year ago

This issue is now closed but linked from this issue, where some follow-up discussions may happen.