rand-chain / rust-randchain

(WIP) Official implementation of the RandChain protocol
GNU General Public License v3.0

chain: remove `time` in `BlockHeader` #63

Closed SebastianElvis closed 3 years ago

SebastianElvis commented 3 years ago

Should we remove `time` and `bits` from `BlockHeader`?

HAOYUatHZ commented 3 years ago

> Should we remove `time`?

`time` will be used in difficulty adjustment.

`time` can be chosen by nodes, giving nodes the opportunity to parallelise mining.

You are right, it's my mistake. A better way is to exclude `time` from `h_g` and `h_state`.

But we still need to think of a way to decide between two blocks that differ only in `time`.
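If `time` is kept in the header but excluded from `h_g`/`h_state`, two headers that differ only in `time` hash identically. A minimal std-only sketch of that idea, with hypothetical field names and `DefaultHasher` standing in for the real cryptographic hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical header layout, not the actual rust-randchain types.
#[derive(Hash)]
struct HashedFields {
    version: u32,
    previous_header_hash: [u8; 32],
    nonce: u32,
}

struct BlockHeader {
    hashed: HashedFields,
    // Carried in the header but deliberately excluded from the mining
    // hash, so nodes cannot grind on it.
    time: u32,
}

// Stand-in for the real cryptographic hash (e.g. SHA-256d).
fn mining_hash(header: &BlockHeader) -> u64 {
    let mut h = DefaultHasher::new();
    header.hashed.hash(&mut h); // `time` never enters the preimage
    h.finish()
}

fn main() {
    let mk = |time: u32| BlockHeader {
        hashed: HashedFields { version: 1, previous_header_hash: [0; 32], nonce: 7 },
        time,
    };
    // Two headers differing only in `time` hash identically.
    assert_eq!(mining_hash(&mk(1_000)), mining_hash(&mk(2_000)));
    println!("ok");
}
```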

SebastianElvis commented 3 years ago

> > Should we remove `time`?
>
> `time` will be used in difficulty adjustment.
>
> `time` can be chosen by nodes, giving nodes the opportunity to parallelise mining.
>
> You are right, it's my mistake. A better way is to exclude `time` from `h_g` and `h_state`.

For difficulty adjustment, we can use `iterations`. Given the limited processor frequency, we can ensure that x iterations take at least kx seconds. With more nodes, `iterations` is likely to be smaller. Then, we can increase the difficulty.

> But we still need to think of a way to decide between two blocks that differ only in `time`.
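The retarget idea above could be sketched as follows; the rule, constants, and the 4x clamp (borrowed from Bitcoin's retarget bounds) are illustrative assumptions, not the protocol's actual algorithm:

```rust
// Hypothetical retarget: if recent blocks were mined in fewer iterations
// than targeted (more nodes racing in parallel means the winner finishes
// sooner), scale the difficulty up; if more, scale it down. The 4x clamp
// mirrors Bitcoin's retarget bounds.
fn retarget(current_difficulty: u64, target_iterations: u64, observed_iterations: u64) -> u64 {
    let observed = observed_iterations.clamp(target_iterations / 4, target_iterations * 4);
    // Difficulty scales inversely with the observed iteration count.
    current_difficulty * target_iterations / observed
}

fn main() {
    // Blocks found in half the targeted iterations -> double the difficulty.
    assert_eq!(retarget(1_000, 600, 300), 2_000);
    // Blocks taking twice the targeted iterations -> halve the difficulty.
    assert_eq!(retarget(1_000, 600, 1_200), 500);
    println!("ok");
}
```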

HAOYUatHZ commented 3 years ago

> Given the limited processor frequency, we can ensure that x iterations take at least kx seconds.

To me, with hardware evolution this is a bit idealistic. We cannot foresee how long it will take to compute an iteration in the future. It's fine for an experiment, but this assumption does not hold for a robust project. Think about it this way: if hash power won't really evolve that much, do we really need difficulty adjustment?

Maybe we should fork rust-randchain into randchain-demo, so that we can skip the difficulty adjustment? In an experiment we can assume hardware computing power always remains the same, and hence there's no need for difficulty adjustment.

> With more nodes, `iterations` is likely to be smaller.

I am not sure.

If 1 key = 1 CPU (node), then all nodes should mine with similar iteration counts (because hashing is uniformly distributed).

I agree with you that using `iterations` can help, but the logic is more like higher-end hardware -> fewer iterations, rather than something related to the number of nodes.

So my conclusion is: we can remove `bits` and use `iterations`, and we'd better check the average/median iterations of the last 2 hours.
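Picking the median iteration count over a recent window of blocks (say, those from roughly the last 2 hours) might look like this; the window selection and names are illustrative:

```rust
// Median of the iteration counts in a recent, non-empty window of blocks.
fn median_iterations(mut window: Vec<u64>) -> u64 {
    window.sort_unstable();
    let n = window.len();
    if n % 2 == 1 {
        window[n / 2]
    } else {
        (window[n / 2 - 1] + window[n / 2]) / 2
    }
}

fn main() {
    assert_eq!(median_iterations(vec![900, 300, 500]), 500);
    assert_eq!(median_iterations(vec![400, 800, 200, 600]), 500);
    println!("ok");
}
```

Using the median rather than the mean keeps the estimate robust against a few outlier blocks.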

SebastianElvis commented 3 years ago

> > Given the limited processor frequency, we can ensure that x iterations take at least kx seconds.
>
> To me, with hardware evolution this is a bit idealistic. We cannot foresee how long it will take to compute an iteration in the future. It's fine for an experiment, but this assumption does not hold for a robust project. Think about it this way: if hash power won't really evolve that much, do we really need difficulty adjustment?

We can. For iteratively sequential functions, the execution time is bounded by the processor's frequency. The world's fastest processors (~10 GHz) can only be <5x faster than the 2-3 GHz processors in laptops. This is also evidenced by the VDF contest. See the analysis in the RandChain paper.

> Maybe we should fork rust-randchain into randchain-demo, so that we can skip the difficulty adjustment? In an experiment we can assume hardware computing power always remains the same, and hence there's no need for difficulty adjustment.

Is it difficult to add/remove difficulty adjustment? Intuitively it should be fine to keep them in a single project but in two different branches.

> > With more nodes, `iterations` is likely to be smaller.
>
> I am not sure.
>
> If 1 key = 1 CPU (node), then all nodes should mine with similar iteration counts (because hashing is uniformly distributed).
>
> I agree with you that using `iterations` can help, but the logic is more like higher-end hardware -> fewer iterations, rather than something related to the number of nodes.
>
> So my conclusion is: we can remove `bits` and use `iterations`, and we'd better check the average/median iterations of the last 2 hours.

I'd say so. In fact, Bitcoin also requires every block's timestamp to be within 2 hours: a block whose timestamp is more than 2 hours ahead of network-adjusted time is rejected.
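For comparison, Bitcoin's timestamp sanity checks (paraphrased: a block's time must exceed the median time of the previous 11 blocks and be at most 2 hours ahead of network-adjusted time) can be sketched as:

```rust
// Maximum allowed future drift for a block timestamp, as in Bitcoin.
const MAX_FUTURE_DRIFT: u64 = 2 * 60 * 60; // 2 hours, in seconds

// `median_time_past` is the median timestamp of the previous 11 blocks;
// `now` is the node's network-adjusted clock.
fn timestamp_valid(block_time: u64, median_time_past: u64, now: u64) -> bool {
    block_time > median_time_past && block_time <= now + MAX_FUTURE_DRIFT
}

fn main() {
    // A timestamp equal to "now" is fine.
    assert!(timestamp_valid(1_000_000, 999_000, 1_000_000));
    // 3 hours in the future -> rejected.
    assert!(!timestamp_valid(1_000_000 + 3 * 3_600, 999_000, 1_000_000));
    println!("ok");
}
```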

HAOYUatHZ commented 3 years ago

> We can. For iteratively sequential functions, the execution time is bounded by the processor's frequency. The world's fastest processors (~10 GHz) can only be <5x faster than the 2-3 GHz processors in laptops. This is also evidenced by the VDF contest. See the analysis in the RandChain paper.

As I said, this is not guaranteed to hold in the future.

SebastianElvis commented 3 years ago

> > We can. For iteratively sequential functions, the execution time is bounded by the processor's frequency. The world's fastest processors (~10 GHz) can only be <5x faster than the 2-3 GHz processors in laptops. This is also evidenced by the VDF contest. See the analysis in the RandChain paper.
>
> As I said, this is not guaranteed to hold in the future.

It will hold for at least 50 years. ~10 GHz is roughly the limit of processor frequency. Dynamic power grows roughly with the cube of frequency (P ∝ V²·f, and the achievable frequency scales with voltage), so doubling the frequency costs about eight times the power, which would burn the processor. This is a fundamental limit that no ASIC can break, and the reason why parallel computing exists and remains a hot research topic.
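The scaling argument can be made concrete with the usual rough model: dynamic power grows as P ∝ V²·f, and since the achievable frequency scales roughly linearly with voltage, P ∝ f³. A tiny worked example of that cube law (the model itself is the only assumption here):

```rust
// Rough dynamic-power model: P ∝ V^2 * f with V scaling ~linearly in f,
// hence P ∝ f^3.
fn relative_power(freq_ratio: f64) -> f64 {
    freq_ratio.powi(3)
}

fn main() {
    // Doubling the frequency needs ~8x the power budget.
    assert_eq!(relative_power(2.0), 8.0);
    // A 10 GHz part vs a 2.5 GHz laptop core (4x) would need ~64x.
    assert_eq!(relative_power(4.0), 64.0);
    println!("ok");
}
```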

HAOYUatHZ commented 3 years ago

> Is it difficult to add/remove difficulty adjustment? Intuitively it should be fine to keep them in a single project but in two different branches.

Using a separate branch for experiments is a good idea.

So we are going to remove `time`, keep `bits` for difficulty validation, and use `iterations` for difficulty adjustment.

All good?

SebastianElvis commented 3 years ago

Yeah this would be good.
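For reference, a minimal sketch of the header layout agreed above: `time` dropped, `bits` kept for per-block difficulty validation, `iterations` recorded for retargeting. Names and types are hypothetical, not the actual rust-randchain definitions:

```rust
// Hypothetical post-#63 header: no `time` field.
struct BlockHeader {
    version: u32,
    previous_header_hash: [u8; 32],
    bits: u32,       // compact difficulty target, validated per block
    iterations: u64, // sequential hashing steps spent mining this block
    nonce: u32,
}

// Validation keeps using `bits`; retargeting reads `iterations`.
fn check_bits(header: &BlockHeader, expected_bits: u32) -> bool {
    header.bits == expected_bits
}

fn main() {
    let h = BlockHeader {
        version: 1,
        previous_header_hash: [0; 32],
        bits: 0x1d00_ffff,
        iterations: 42,
        nonce: 0,
    };
    assert!(check_bits(&h, 0x1d00_ffff));
    println!("ok");
}
```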