kindelia-archive / Kindelia

A minimal decentralized computer.
MIT License

Consensus and Miner Incentives #2

Open slightknack opened 2 years ago

slightknack commented 2 years ago

Stumbled across this repo the other day and I absolutely love it! I just have a few questions about miner incentives and how they relate to transaction evaluation. I'm asking these questions because I think the project is cool and would like to understand it better. Thanks for taking the time to look at them!

Transaction Evaluation Incentive

If the transaction is an eval() block, then a user can just include a payment to the miner on it.

First question: How can a miner know the reward they earn associated with a transaction before evaluating it? For instance, say I have this block:

eval Bob(signature) {
  CatCoin.send(@Alice, 1000)
  Work.doComputationallyExpensiveThing(1000000)
  CatCoin.send($block_miner(), 10000)
}

How can the miner verify that they get a massive reward by including this transaction in a block, without having to do the work before evaluation?

The evaluator may not be able to rely on the line CatCoin.send($block_miner(), 10000) actually being reached.
What if `Bob` is out of money after sending to `Alice`? Even if `Bob` still has money, `Work.doComputationallyExpensiveThing` could send more money to others. If a sub-transaction fails, how can the user guarantee that the rest of the bond also fails? (Or are bonds not atomic in this sense; would this have to be implemented manually?) The miner could technically look at the set of bonds each function touches to determine which expressions don't intersect and can be evaluated in parallel, I suppose, but the issue still exists.

Even if the miner determines they will earn the reward by mining the block, say we have an inductive function that interleaves work and rewards, and only in the base case does it send the block miner a large reward:

bond Work.doExpensiveRecursive(n: #word): #word {
  if (n == 0) then
    CatCoin.send($block_miner(), 10000) // massive reward upon completion
  else
    Work.doComputationallyExpensiveThing(n) // hard work
    CatCoin.send($block_miner(), 1)         // small reward, not enough to offset hard work
    Work.doExpensiveRecursive(#sub(n, 1))   // interleaves work and reward
}

This requires the miner to do a lot of work before they can claim the (justifiably massive) reward that offsets their operational costs.

People would be incentivized to make the block reward clear, so this pattern is unlikely to appear in practice. However, even if the rewards are 'clear,' there are still many ways to reward miners: any currency implemented on-chain could be used as gas.

Because there are so many different ways to send a reward, it may be hard for transaction submitters to determine how they can send a reward. What if miners expect different currencies, or specify rewards in different ways (e.g. two miners both take CatCoin: miner 1 wants the fee specified up front, but miner 2 wants reward to be put at the end)? Would using a particular reward coin lock you to the subset of miners that use that coin?

Returning to the earlier case:

eval Bob(signature) {
  CatCoin.send(@Alice, 1000)
  Work.doComputationallyExpensiveThing(1000000)
  CatCoin.send($block_miner(), 10000)
}

Even if a miner evaluates this block, 10000 CatCoin might not be enough to recoup the work spent to reach that point. Asking a miner how much work a transaction will cost, and how much reward they'll get for doing it, is akin to asking the miner to solve the halting problem (or at least a hard variant of it, if bonds aren't Turing-complete).

Aside: thinking about formalizing mining strategies
In a sense, given a function `F(World[N])` that evaluates the wealth of a miner, where `World[N]` is the world state after `N` transactions, miners are trying to select a set of transactions `T` from the pool of all potential transactions so that `F(World[N+1])` is maximized. `F` could be something simple, like `CatCoin.balance(@Miner)`; or altruistic, like counting the number of transactions that refactor previously-seen lambda forms into explicit names/bonds; or complex, like maximizing the value of all tokens held by `@Miner` with respect to the best exchange rates between tokens at this point in time; or malicious, like including transactions that squat on namespaces or bonds full of garbage data.

An optimal strategy for maximizing `F` for a set of black-box transactions `T` may be to evaluate all transactions in parallel (i.e. perform one beta-reduction per transaction, repeat), subtracting the number of reductions completed (times some operational cost with respect to `F`) from the amount of gas rewarded by the bond (as given by `F`). Evaluation continues until either completion, in which case the miner includes the transaction in the next block, or exhaustion, in which case the miner caches the partial result (in case the transaction is included in a block mined by another miner, to reduce time spent on verification) and stops working on it. **This would incentivize transaction submitters to include payment upfront.**

The nice thing about keeping gas separate from the transaction itself is that, if a function runs out of gas, the evaluation can be continued by adding more gas to the function. Time miners spend on evaluating functions, even partially, will be converted into gas, allowing miners to just work on evaluating functions without caring about whether or not the computation will terminate before they can collect their reward.
If I have a function that generates prime numbers forever and top it off with 100 gas, I'll get a partial evaluation with as many primes as possible for that price. Adding 100 more gas would get me that many more primes, resuming from where the function left off. Under the current model, if I submit a transaction that doesn't get included because I underestimated the fees necessary, I have to duplicate my transaction instead of just putting some more gas behind it.
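To make the resumable-gas idea concrete, here's a minimal Python sketch. The state layout, the one-gas-per-candidate cost, and all names are made up for illustration; Kindelia has no such API.

```python
def primes_with_gas(state, gas):
    """Resumable, gas-metered evaluation sketch: each candidate number
    examined costs one unit of gas; when the gas runs out, the paused
    state is returned so more gas can resume the computation later."""
    primes, candidate = state
    primes = list(primes)  # don't mutate the caller's state
    while gas > 0:
        candidate += 1
        gas -= 1  # charge one unit per candidate examined
        if candidate > 1 and all(candidate % p != 0 for p in primes):
            primes.append(candidate)
    return primes, candidate

START = ([], 1)  # no primes found yet; the next candidate is 2
```

Running it with 4 gas and then topping it off with 6 more yields the same result as running it with 10 gas in one shot, which is exactly the "just add more gas" property described above.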

Anyway, assume we agree on a common fee format or something and this is a non-issue. Here are my actual questions:

General Questions

You mention the following, which I think is a pretty cool property to have:

Kindelia doesn't have a built-in consensus algorithm: it is just a pure function that receives a sequence of transactions and computes a final state. As such, it relies on external networks to act as the sequencer.

So miners just have to assemble a totally-ordered list of transactions using some consensus mechanism, which is then folded through Kindelia to produce a global state.
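In other words, something like this Python sketch. The transaction shape, `apply_tx`, and the invalid-transaction-is-a-no-op rule are my invention for illustration, not Kindelia's actual format:

```python
# Global state as a left fold of the ordered transaction list through
# a pure state-transition function.
from functools import reduce

def apply_tx(state, tx):
    """Hypothetical transition: record a balance transfer; an invalid
    transaction leaves the state unchanged."""
    if tx.get("kind") == "send":
        balances = dict(state)
        if balances.get(tx["from"], 0) < tx["amount"]:
            return state  # insufficient funds: no state change
        balances[tx["from"]] -= tx["amount"]
        balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
        return balances
    return state  # unknown transaction kinds are skipped

def compute_state(genesis, ordered_txs):
    return reduce(apply_tx, ordered_txs, genesis)
```

The key property is that any node folding the same ordered list from the same genesis arrives at the same state, regardless of who ordered the transactions.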

  1. What happens if a transaction is invalid? If we use, say, Ethereum as a transaction sequencer, what does the network do if a sequenced transaction claims a name that has already been claimed? Is the transaction skipped? Is this impossible? If so, how? If a type definition and a bond that uses that type definition are submitted as two transactions, what happens if the first is sequenced before the second? Is that an error? Is ordering enforced, and how? By waiting for the type to be included in a block before submitting the bond?

  2. What's stopping hard-work attacks? Say a malicious miner generates a large product of two prime numbers and submits a transaction to factor it. Already knowing the factors, he then skips the work and includes the transaction and its result in a block. The rest of the network can't verify his solution without doing the factorization themselves (because they still have to evaluate the transaction for any side-effects, they can't just take shortcuts here), which may clog up the network. Factoring is just one example: any problem with one-way computational irreducibility could potentially be used for this attack.

  3. If every node has to evaluate every transaction, aren't we doing a lot of duplicate work? If we have an expensive computation, how can we ensure that it is executed correctly without having to execute the entire transaction ourselves? If we have 100 computers on the network, wouldn't it be better to evaluate 100 transactions in parallel rather than 1 transaction 100 times? This is more of an issue with blockchain construction in general—sharding is an interesting approach, but I'm afraid it may be too complex to include in the Kindelia protocol.

  4. How will new nodes verify the chain from scratch? The global state is determined by evaluating a list of transactions, meaning a new node starting from zero has to evaluate each transaction in order to reach the shared global state. If we assume the network to be maximally efficient, i.e. the time it takes to evaluate transactions between blocks is equal to the time it takes to mine the next block, then the rate at which a node catches up and the rate at which new blocks are mined are the same. In practice, this shouldn't be much of an issue because there is overhead (and, assuming Moore's Law holds, older transactions will process faster than they originally did), but it's still something to think about.

  5. This is a bit of a dumb question, but: Are the miners who order transactions the same miners that evaluate transactions? If a miner includes a transaction in a block, all other miners have to evaluate it. Say we have a transaction with a lot of work and a high reward: an ordering miner includes it in a block to collect the reward, without evaluating it. Who is responsible for actually ensuring transactions are evaluated?

  6. What are the semantics of the @ syntax? Is it similar to quote in a language like Scheme, converting a name to a literal representation of its tokens? I've seen it used to (1) annotate ownership of a bond, (2) specify bonds as recipients of a transaction for a currency, and (3) check that the hash of a type definition matches an expected value to ensure that it has been defined. (Speaking of: if a name is not defined, what happens when one tries to use it, say to compute a hash? A compile-time type error?)

Assumptions I'd Like to Clarify

This is my mental model of how Kindelia works:

  1. New transactions are ordered by a consensus mechanism, the default being data-only proof-of-work, incentivized by mining fees included in the evaluation of the transaction.
    1. Is just the transaction definition included on-chain, or is the result of the evaluated transaction included as well? If the result is included, is it the entire result, or just the hash of the result? Is the number of beta reductions included?
    2. Must the transaction be evaluated before inclusion (i.e. so the result can be included), or after inclusion (the transactions just need to be ordered, whether evaluated or not)? Or does this not matter?
  2. After transactions have been ordered, transactions are evaluated in order, and can do one of four things:
    1. Declare a new name, like CatCoin.
      1. (Is the size of a name only limited by block size? Is there any form of namespacing to prevent a malicious third party from registering, say CatCoin.sendCoins if CatCoin and CatCoin.send have been registered?)
    2. Declare a new type, which is a non-generic ADT:
      1. (You mention a compact binary format. Is this documented somewhere?)
    3. Declare a new bond, which is essentially a function that cannot take higher-order arguments.
    4. Evaluate a script, which may produce side-effects.

Is this correct? What am I missing?

Fin

I think that Kindelia is a cool project: to say the least, I wouldn't spend the time writing this all out if I wasn't interested. I'd like this project to succeed, which is why I'm raising these questions for consideration now. If they've already been considered, great! If not, I think there are a number of ways to address these questions, and I'm excited to see how the project evolves from here.

Thanks for taking the time to read this through, have a nice week!

VictorTaelin commented 2 years ago

How can the miner verify that they get a massive reward by including this transaction in a block, without having to do the work before evaluation?

He has to do the work, but the miners will probably just have a much smaller budget ("gas limit") to verify how worthwhile a transaction is. This will force users to push their payment methods to the beginning of the eval{} block, to "keep the miner interested".

Would using a particular reward coin lock you to the subset of miners that use that coin?

Yes, it would. In general, I believe wallets would just default to a specific reward coin, and it would essentially operate similarly to Ethereum. I don't think people will be paying fees with swords and NFT art, for the same reason we don't pay for video-games with rice and jewels: it is just convenient to settle on a single medium of exchange. But people could do things differently in practice if they wanted to. For example, if in the future that "reward coin" breaks (quantum computers?), people would just migrate to a quantum-resistant one, instead of the whole protocol falling apart. In short, I believe the market will naturally sort things out, and this doesn't need to be hardcoded into the protocol. I could be wrong.


What happens if a transaction is invalid?

It is simply ignored. Think of Kindelia as a `tx : Buffer -> KindeliaState -> KindeliaState` function. What `tx` does is parse the buffer into a `KindeliaTransaction`, and then apply it. If that parsing fails, the transaction is simply ignored.
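A minimal Python sketch of that shape, using JSON as a stand-in for Kindelia's real binary parser; `apply_parsed` and the `"name"` field are invented for illustration:

```python
import json

def apply_parsed(tx_obj, state):
    """Trivial stand-in for applying a parsed transaction."""
    return state + [tx_obj["name"]]

def tx(buffer, state):
    # tx : Buffer -> KindeliaState -> KindeliaState
    try:
        parsed = json.loads(buffer)
        parsed["name"]  # minimal "does this look like a transaction" check
    except (ValueError, TypeError, KeyError):
        return state  # parsing failed: the transaction is simply ignored
    return apply_parsed(parsed, state)
```

A buffer that doesn't parse into a valid transaction returns the state unchanged, so invalid data on the sequencing layer is harmless to Kindelia's state function.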

If a type definition and a bond that uses that type definition are submitted as two transactions, what happens if the first is sequenced before the second?

The first fails and the second succeeds, so you must re-submit the first one again. Note that we have mechanisms in mind to allow multiple transactions to be included, but these aren't published yet.

What's stopping hard work attacks?

Nothing. That's a great point, and it invalidates some arguments of the paper. I'll keep that in mind when refactoring it.

If every node has to evaluate every transaction, aren't we doing a lot of duplicate work?

Yes, that is by design, and won't change. I do have ideas for a sharded blockchain, but the main issue is that, at the logical level, bonds would need to operate as actors, and communication would need to be done via async messages, instead of a simple call. This would greatly increase communication costs, and would remove the communication type safety.

How will new nodes verify the chain from scratch?

The network won't be "maximally efficient". It will probably operate at a constant factor of the maximum efficiency, say, 16x or so, so that a blockchain that has operated for 16 years would take 1 year to sync, in an absolute worst-case scenario (which will never be the case, for a variety of reasons). That said, we are working on efficient, beta-optimal and massively parallel evaluators for Kindelia's core (check the CaseCrusher repository on this org), so it should be at least 2 orders of magnitude faster than the EVM. I estimate Kindelia will be able to perform 10-100x more operations at layer 1 than Ethereum, but only time will tell.

Are the miners who order transactions the same miners that evaluate transactions?

Yes, they are the same. Not sure I get this question, but if a miner includes a transaction without evaluating it at all, then he/she must pray that it is actually valid; otherwise his/her whole block will get dropped. Right?

What are the semantics of the @ syntax?

This isn't decided yet, but, right now, what is on my mind is that it will just hash the name into a 64-bit identifier. So your name is actually a uint64. There are many minor things like this to decide/adjust before the protocol is finished. We're also waiting on some CaseCrusher benchmarks to decide if we'll go with a case-tree functional style or a Haskell-like equational-clauses style. This whole repository is just an early draft; expect everything to improve considerably in the upcoming days.
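For illustration, a Python sketch of hashing a name down to a uint64. The choice of blake2b here is arbitrary; the protocol's actual hash function isn't decided:

```python
import hashlib

def name_to_id(name: str) -> int:
    """Hypothetical: hash a name to a 64-bit identifier by taking an
    8-byte digest and reading it as an unsigned integer."""
    digest = hashlib.blake2b(name.encode("utf-8"), digest_size=8).digest()
    return int.from_bytes(digest, "big")
```

The mapping is deterministic, so every node derives the same uint64 for `CatCoin`, but (unlike sequential numbering) two distinct names could in principle collide.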

New transactions are ordered by a consensus mechanism, the default being data-only proof-of-work, incentivized by mining fees included in the evaluation of the transaction.

Correct.

Is the just transaction definition included on-chain, or is the result of the evaluated transaction included as well? If the result is included, is it the entire result, or just the hash of the result? Is the number of beta reductions included?

The blockchain only includes a list of transactions. It doesn't include any result, cache, hash, receipt, nor anything like that. Light wallets are impossible, other than compiling the whole thing to some zk-proof scheme.

Is the size of a name only limited by block size?

Yes, but we may impose a limit (should we?). Also, a 10-char limit would allow the name to be represented by its own string, as that would fit in 64 bits, removing the need for hashing.
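To sketch why 10 characters fit: with a 64-symbol alphabet, each character costs 6 bits, so 10 characters need 60 bits plus a length marker, which fits in a uint64. The alphabet and encoding below are hypothetical, not Kindelia's:

```python
# 64 symbols => 6 bits per character; 10 chars * 6 bits = 60 bits <= 64.
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789._"
assert len(ALPHABET) == 64

def pack_name(name: str) -> int:
    """Pack a name of up to 10 characters into a single uint64."""
    if not 0 < len(name) <= 10:
        raise ValueError("name must be 1..10 characters")
    value = 1  # leading 1 marks the length, so "a" != "aa"
    for ch in name:
        value = (value << 6) | ALPHABET.index(ch)
    return value

def unpack_name(value: int) -> str:
    chars = []
    while value > 1:  # stop at the length-marker bit
        chars.append(ALPHABET[value & 0b111111])
        value >>= 6
    return "".join(reversed(chars))
```

Unlike hashing, this encoding is reversible and collision-free, at the cost of capping name length.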

Is there any form of namespacing to prevent a malicious third party from registering, say CatCoin.sendCoins if CatCoin and CatCoin.send have been registered?

Great question. No, there isn't. I don't think that is possible, since the naming system is part of the protocol, and there is no authentication on the protocol itself. So how could that be possibly implemented? I'm aware this does raise an obvious vector for phishing attacks. The only solution I can think of is to be very clear that names aren't namespaced, so users should never assume that CatCoin.something is part of the CatCoin application.

(You mention a compact binary format. Is this documented somewhere?)

Not yet, but it will be soon. You can look at the code in the .kind implementation.


Thanks for sharing all these questions. It's interesting to have such a well-written issue at this point in time, given we haven't published any of this yet. How did you find this project?

slightknack commented 2 years ago

Thank you for the thoughtful response.

In short, I believe the market will naturally sort things out, and this doesn't need to be hardcoded into the protocol.

I generally agree, and think that it may be possible to specify payouts in terms of the miner's preferred currency through some sort of on-chain market system:

 eval Bob(signature) {
     check = CatCoin.withdraw(50)
     exchanged = Market.exchange($block_miner_preferred_coin(), check)
     Market.send($block_miner(), exchanged)
     // rest of evaluation as normal
 }

It would be nice to extract some of these into common, reusable patterns; since higher-order functions are not permitted, perhaps a simple compile-time macro system could be used, where expansion is required before inclusion in a transaction? (Of course, when all you have is a hammer, everything is a nail; macros tend to be my hammer.)


Note that we have mechanisms in mind to allow multiple transactions to be included, but these aren't published yet.

I wouldn't want to pressure you to release them before they're done, but I look forward to learning about the design you have in mind. I raised a point about atomic transactions: perhaps it would be a good idea to include a general way to bail out of a transaction, which cancels all changes made to World in an eval upon failure? Another option could be transaction groups: an ordered set of transactions that is either evaluated atomically, in order, or not committed to world state at all. Or transactions could include the hash of every transaction they depend upon, which enforces an ordered dag-like structure. (A hash isn't even required; they could just specify the set of block/transaction numbers.)

I do have ideas for a sharded blockchain, but the main issue is that, at the logical level, bonds would need to operate as actors, and communication would need to be done via async messages, instead of a simple call.

I've been working on a small blockchain (ok, really more like an append-only log) to synchronize and order writes among a subgroup of people with write access to the chain. The central idea is that each node maintains a log of their own writes, and logs are merged into a single 'main log' in a deterministic manner through the use of CRDTs. Local-first Wasm applications can be run on top of the main log; application developers get distributed synchronization and write-conflict resolution 'for free.' It still has a ways to go, though, and is a bit unrelated to the topic at hand.

One thing I've been thinking about is how to provide consistency at a global level, where everyone has write access (to coordinate the distribution of applications). I realized that I needed what essentially amounts to a data-only blockchain, but was struggling to figure out which consensus algorithm to use (I've been experimenting by writing an implementation of the Stellar consensus protocol). Kindelia + Ubilog are an interesting development in this area, and I wonder if they may be able to fill the gap I was looking to fill.

if a miner includes a transaction without evaluating it at all, then he/she must pray that it is actually valid, otherwise his/her whole block will get dropped. Right?

You mentioned earlier that invalid sequenced transactions are just dropped once they reach Kindelia. If this is the case, what's stopping nodes from including random transactions (that parse and typecheck, at least) in their block?

it will just hash the name into a 64-bit identifier

Maybe you should add notation for specifying raw identifiers themselves; instead of CatCoin, I could write its identifier, say #A6F2...8F, out in full.

If the hash of the bond the name points to is used to construct the raw identifier, it'd be possible to reference specific versions of bonds, even if they're rebound. I'm not sure whether or not this is a property you want the system to have, but it's worth considering.

other than compiling the whole thing to some zk-proof scheme.

I mean, zk proofs can be implemented on top of Kindelia for verifying work done off-chain, which would be cool.

Yes, but we may impose a limit (should we?). [...] removing the need for hashing.

I don't think you should impose a limit for names. Just two quick notes:

  1. I think having the 64-bit identifiers be the base source of truth, rather than the names themselves, would be a good idea. A content-addressed scheme for determining this identity (or something similar) has some neat properties.
  2. You don't have to hash the name to create a unique identifier! Because all transactions are ordered, we can just refer to the name by the block/transaction it occurs in. For example, if CatCoin is defined in the 100th transaction (or it is the 100th definition, whichever is easier), it would have the identifier #000...64.
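A toy Python sketch of that sequential-identifier idea (the class and method names are invented):

```python
class NameRegistry:
    """Sketch: names identified by order of first definition,
    not by hash. Every node replaying the same ordered transaction
    list assigns the same identifiers."""

    def __init__(self):
        self.ids = {}

    def define(self, name: str) -> int:
        if name in self.ids:
            raise ValueError(f"name {name!r} is already defined")
        ident = len(self.ids) + 1  # 1-indexed definition counter
        self.ids[name] = ident
        return ident
```

Because identifiers are just definition counters, the 100th definition gets id 100 (0x64 in hex, matching the #000...64 example above), and no hashing or collision handling is needed.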

Thoughts?

So how could [namespacing] be possibly implemented?

Here's my proposal:

Anyone can register a name at the root level: CatCoin, Kindelia, my_factorial, etc. Like a bond, when a name is declared, an @Owner may also be specified:

name CatCoin @Owner

type Owner(signature) CatCoin { ... }

Then, for CatCoin to be defined (as either a type or a bond), @Owner has to prove authentication. (So if I create the name CatCoin, nobody else will be able to define a bond for it in the meantime.)

To register a name with a . in it (a namespaced name), you must prove ownership of the previous name in the path. So only @Owner could create and define the name CatCoin.send. These are just rough semantics, but it should illustrate the mechanism:

name Owner(signature) CatCoin.send          // can't be namespaced further
name Owner(signature) CatCoin.Market @Owner // can be namespaced further

Currently, the general concept of 'names imply ownership/permission' only applies to bonds and eval; I think this concept could be extended to all 4 basic operations. There are some open questions here (how should this concept be unified?), but I think this is a justifiable proposal for how namespacing could work.


How did you find this project?

I chanced upon Lambolt/CaseCrusher, then looked at the org. ¯\\_(ツ)_/¯


PS—I think I have an argument as to how higher-order lambdas can be included in Kindelia without any changes to the protocol. I'll open an issue detailing the argument soon. Edit: See #3.

steinerkelvin commented 2 years ago

What's stopping hard-work attacks? Say a malicious miner generates a large product of two prime numbers and submits a transaction to factor it. Already knowing the factors, he then skips the work and includes the transaction and its result in a block.

Given that the results aren't published at all and blocks would have a deterministic gas limit (I suppose), is that still a problem? Transactions are just "prompts" for the Kindelia network to compute; no results are attached.

Let's say I publish something like:

riddle(x: #word, y: #word): ... {
  if factor(182749235683165) == [x, y]:
    pay(...)
}

and then

eval {
  riddle(827346, 9821469)
}

that second transaction would fail because it tries to factor the large number and exceeds the gas limit, thus being ignored/invalidated. Am I missing something here?
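If I understand the model, it behaves like this Python sketch, where evaluation charges gas per step and a transaction that exhausts its budget is dropped with no state change. The gas costs, names, and trial-division factorizer are all invented for illustration:

```python
class OutOfGas(Exception):
    pass

class Evaluator:
    def __init__(self, gas_limit):
        self.gas = gas_limit

    def charge(self, cost=1):
        if self.gas < cost:
            raise OutOfGas()
        self.gas -= cost

    def factor(self, n):
        """Trial division, charging one gas unit per candidate divisor."""
        d = 2
        while d * d <= n:
            self.charge()
            if n % d == 0:
                return (d, n // d)
            d += 1
        return (n, 1)

def run_tx(thunk, gas_limit):
    ev = Evaluator(gas_limit)
    try:
        return ("ok", thunk(ev))
    except OutOfGas:
        return ("ignored", None)  # transaction dropped, no state change
```

Under this model, the `riddle` eval above is ignored by every node deterministically, since all nodes enforce the same limit.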


You don't have to hash the name to create a unique identifier! Because all transactions are ordered, we can just refer to the name by the block/transaction it occurs in.

I liked this a lot. But that would make the reference to an entity depend on where it was sequenced, so we would need to: have some sort of address translation / relative addressing to reference transactions published in groups; and wait for transactions to be confirmed before referencing them (which disturbs UX).

And to compile contracts, we would also need to know the position of every entity they reference, instead of just knowing their names and definitions (like libraries).


I like the idea of using the same authentication logic of eval/bonds for namespacing, and of extending ownership to all entities. Feels pretty elegant.

We could have types that can only be constructed by certain bonds, which could act as certificates (as suggested by @rigille). Or re-deployable bonds (with fixed interfaces only), so we could re-deploy authentication bonds (e.g. to replace the signature method) without losing the "account".

I don't think you should impose a limit for names.

I'm also of that opinion, but I think we wouldn't be able to have names with unlimited size (or arbitrarily nested namespaces, for that matter) because we would have to add lists as primitives. What do you think?

slightknack commented 2 years ago

Transactions are just "prompts" for the Kindelia network to compute, no results attached.

How do you ensure consistency between nodes then? In other words, how do nodes determine which "prompts" to evaluate?

that second transaction would fail because it tries to factor the large number and exceeds the gas limit, thus beeing ignored/invalidated. Am i missing something here?

Who sets the gas limit? How is the gas limit defined? If nodes receive rewards irrespective of beta-reduction gas, why does the number of beta-reductions remain a useful metric among other metrics?

This transaction is also side-effect free, but what if it randomly sent out CatCoin depending on the result of the computation? If the network somehow determines a collective gas limit, hard-work attacks can still be made up to that limit.

Additionally, although on-chain space is bounded to 40GB/year (as in #3), it doesn't seem like Kindelia state space is bounded in any meaningful way. What's stopping me from writing small transactions that bind 1GB of randomly generated data (using a PRNG) and use that data in future transactions (to force evaluation)?

But that would make the reference to a entity depend on where it was sequenced

I suggested this construction as an alternative to hashing, but it does have some issues. I don't think it should be used in practice.

I'm also of that opinion, but I think we wouldn't have names with unlimited size

You could require that all names be reduced to their 64-bit identifiers before inclusion in a transaction. I don't see how lists fit in. This.That.This is still just one name (the . is nothing special); it's just that names with .s in them must prove they were created by the owner of the previous namespace.

slightknack commented 2 years ago

I think Kindelia is pretty interesting, but realize that because this project is at an early point in development, there are still a few things that need to be pinned down. I hope I've raised some interesting points that lead to a more robust system, but until the system is pinned down and the paper is released, I'll refrain from raising additional questions. Thanks y'all :)

steinerkelvin commented 2 years ago

Who sets the gas limit? How is the gas limit defined?

It's fixed. Just like on Ethereum. It is no different.

How do you ensure consistency between nodes then? In other words, how do nodes determine which "prompts" to evaluate?

Kindelia nodes will try to evaluate all blocks/transactions up to the gas limit. The ones that exceed the limit will just be dropped. Since the limit is known and enforced by all nodes, the result will be consistent.

I should note that this is not enforced at Ubilog level. Kindelia nodes will just ignore transactions that are not valid, don't parse, don't typecheck etc.

You wrote on #3:

In other words, what's enforcing the 10,000,000 rewrites / block limit? If a node published a block with more than 10 million rewrites, what will happen? Will the block be dropped? Who/what mechanism sets the limit? If it's 'the compute power of the network,' then there is no formal limit. You mention that miners don't run transactions: if this is the case, who ensures any limits are met?

"Pure" Ubilog nodes don't need to evaluate or enforce anything on block content. But to maximize their profits, nodes will have to evaluate/analyse transactions. Kindelia nodes will enforce the limit. So, if miners don't account for blocks content, ensuring they are valid and profitable, the Kindelia network will just ignore those blocks, thus these miners will be wasting the opportunity to collect the rewards.

Users can publish any other kind of data to the chain, but they'll need to reward miners in other ways, or mine the blocks themselves.

it's just that names with .s in them must prove they were created by the owner of the previous namespace

You're right. :v

VictorTaelin commented 2 years ago

Adding to what Kelvin said.

How do you ensure consistency between nodes then? In other words, how do nodes determine which "prompts" to evaluate?

If you mean that nodes could be running slightly different clients with slightly different code, and believe they're in sync when they're actually not... then you're right: there is no way to make sure you're running the same code as everyone else. This is a good point, and maybe it is a good idea to demand, inside the Kindelia transaction function, the inclusion of the previous block's state hash.

If nodes receive reward irrespective of beta-reduction gas, why does the number of beta-reductions remain a useful metric among other metrics.

Beta reductions are no different from multiplications: each is just a primitive with a computational cost that must be accounted for. The only problem is that, unlike MUL, it is hard to count beta reductions. But they still need to be counted, like any other primitive operation.

Note that, while there is no computation on Ubilog, there is still a computation limit enforced by Kindelia (say, 10m rewrites/block). This limits how many transactions an Ubilog miner can include in a block without it being dropped by the Kindelia network. So, for example, one could include two 5m-rewrite transactions, ten 1m-rewrite transactions, etc. As such, there is an incentive for miners to select transactions that maximize their rewards.
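That selection problem can be sketched as a greedy, knapsack-style heuristic. The reward/rewrite numbers and the specific strategy below are made up; real miners could use any selection strategy they like:

```python
def select_txs(mempool, rewrite_limit=10_000_000):
    """Greedy sketch: pick transactions by reward-per-rewrite until the
    per-block rewrite budget is exhausted. Mempool entries are
    (reward, rewrites) pairs; the numbers are illustrative only."""
    chosen, used = [], 0
    ranked = sorted(mempool, key=lambda t: t[0] / t[1], reverse=True)
    for reward, rewrites in ranked:
        if used + rewrites <= rewrite_limit:
            chosen.append((reward, rewrites))
            used += rewrites
    return chosen, used
```

A transaction whose rewrite count would push the block over the 10m budget is simply skipped, since including it would get the whole block dropped by the Kindelia network.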

it doesn't seem like Kindelia state space is bounded in any meaningful way

It is limited in the same way as Ethereum: by the cost of the STORE primitive. We haven't come up with the final number yet, which is why you didn't see it in the paper. But for the sake of example, if we agree on the 10m rewrites/block limit, then we can charge the equivalent of 10000 rewrites for a 32-bit STORE. That would set a maximum limit of 500 STOREs per second, which means the state of the network would grow by, at most, 120GB per year, which is approximately the same limit as Ethereum's.