pegnet / pegnet

https://pegnet.org

We should limit our miner submissions if we have no chance of winning #207

Closed Emyrk closed 5 years ago

Emyrk commented 5 years ago

When we go live, we want to keep the number of OPRs submitted somewhat bounded. I think an easy solution is to say: if our difficulty is not greater than the difficulty of the Xth-worst record in the previous block, then do not bother submitting.

I was going to put X=300, and we can tighten the bounds as we see the difficulty rates unfold.
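
A minimal sketch of that gate (names and values here are illustrative, not the actual miner code), assuming a higher difficulty value is better and the floor is the Xth-worst graded difficulty from the previous block:

```go
package main

import "fmt"

// shouldSubmit reports whether our best proof-of-work difficulty beats the
// floor taken from the previous block (e.g. the 300th-best OPR's difficulty).
// Higher difficulty is treated as better here.
func shouldSubmit(ourDifficulty, floorDifficulty uint64) bool {
	return ourDifficulty > floorDifficulty
}

func main() {
	var our uint64 = 0xffff000000000000
	var floor uint64 = 0xfffe000000000000 // hypothetical 300th-best from the last block
	fmt.Println("submit:", shouldSubmit(our, floor))
}
```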

Emyrk commented 5 years ago

I'm trying to think of the best way to implement this, and am curious how to handle the edge cases. My thoughts:

If 200+ records exist in the previous block, sort the OPRs by difficulty and take the last OPR, capped at index 300, as our difficulty floor. So if 301 exist, we take the 300th; if 205 exist, we take the 205th.

My rationale is that if everyone puts their floor at the 300th record, some blocks will potentially have fewer than 300 records, our floor would then be 0, and the next block would be huge.

My implementation allows blocks with fewer than 300 records to still produce a good difficulty floor, but if everyone does this, the floor will gradually rise, since no one will submit below the floor they all calculate. My hunch is to do some math on the floor, like taking 90% of the difficulty, so the floor remains roughly constant block to block, but even that might creep up, as I don't think difficulties are going to be linear.
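
As a rough sketch (illustrative only, not the real implementation), that floor selection might look like the following, with the 0.9 factor standing in for the "take 90% of the difficulty" hedge; as noted, a flat percentage on the raw difficulty is a crude stand-in since difficulty is not linear:

```go
package miner

import "sort"

const (
	minRecords = 200 // below this, do not set a floor at all
	maxRank    = 300 // rank (1-based) used as the difficulty floor
	safety     = 0.9 // "take 90% of the difficulty" hedge against drift
)

// difficultyFloor returns the minimum difficulty worth submitting against,
// given the difficulties of the previous block's graded OPRs. A return value
// of 0 means "submit unconditionally". Higher difficulty is treated as better.
func difficultyFloor(prev []uint64) uint64 {
	if len(prev) < minRecords {
		return 0
	}
	sorted := append([]uint64(nil), prev...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] > sorted[j] })

	rank := maxRank
	if len(sorted) < rank {
		rank = len(sorted) // e.g. 205 records -> use the 205th
	}
	return uint64(float64(sorted[rank-1]) * safety)
}
```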

Maybe I should look at other cryptos' difficulty retargeting and do some math based on that? Figured I'd open up a discussion before I go down a rabbit hole.

The basic idea is how do we not submit records when we know we have no chance of winning, and consequently somewhat limit the number of entries pegnet will write every block.

Emyrk commented 5 years ago

The basic idea is how do we not submit records when we know we have no chance of winning. <- this is to save money for miners

PaulBernier commented 5 years ago

I am thinking that the core miner implementation shouldn't be too opinionated in its strategies, and should remain general and open. This strategy is reasonably "common sense" I would say, so I am fine with it being implemented, but I would leave the threshold open to be configured by the miner. Let's not impose a strict strategy; let the miners do the math and figure out what is good for them!

AroundTheBox commented 5 years ago

I think the mining software needs to make decisions based on what's profitable for the user. If it doesn't, someone will modify the software so it does.

In this case the top 50 get a chance to win a portion of 5000 PNT, so assuming everyone is pulling from roughly the same dataset and there's no grading advantage, your expected payout from making the top 50 is 5000 / 50 = 100 PNT. The cost to submit an OPR is $0.001.

It's therefore profitable to submit an OPR when the probability of reaching the top 50 is higher than 0.001 / (PNT Price * 100).

The probability is very difficult to calculate because it depends on the variance of the cutoff PoW score. The easiest way to estimate it would be to look at the last X blocks and see what percent of the time your PoW score would have cracked the top 50, but in reality this will be difficult since there won't be enough blocks at the probabilities we're looking at.
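
For illustration, here is a sketch of that expected-value check and the crude historical estimator, using the figures from this comment (5000 PNT split across 50 winners, $0.001 per entry); the helpers are hypothetical, not part of pegnet:

```go
package ev

// breakEvenProbability returns the minimum probability of reaching the
// top 50 at which submitting an OPR is profitable in expectation:
//   expected payout = rewardPool / winners          (5000 / 50 = 100 PNT)
//   profitable when   p * payout * pntPriceUSD > entryCostUSD
func breakEvenProbability(pntPriceUSD float64) float64 {
	const (
		entryCostUSD = 0.001
		rewardPool   = 5000.0
		winners      = 50.0
	)
	expectedPayoutPNT := rewardPool / winners
	return entryCostUSD / (pntPriceUSD * expectedPayoutPNT)
}

// estimateTopNProbability is the crude estimator described above: the share
// of recent blocks in which our difficulty would have beaten the Nth-place
// cutoff. cutoffs holds the 50th-best difficulty of each recent block.
func estimateTopNProbability(ourDifficulty uint64, cutoffs []uint64) float64 {
	if len(cutoffs) == 0 {
		return 0
	}
	wins := 0
	for _, c := range cutoffs {
		if ourDifficulty > c {
			wins++
		}
	}
	return float64(wins) / float64(len(cutoffs))
}
```

With a $1 PNT price this gives a break-even probability of 0.001 / (1 * 100) = 0.00001, i.e. the 1-in-100,000 odds mentioned below.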

An interesting implication of this is that the higher the PNT price, the more records will be submitted. I have no context for even the order of magnitude of the PNT price, but if it were $1, for example, it's in your interest to submit an OPR as long as your odds of reaching the top 50 are better than 1 in 100,000.

Basically it doesn't take a big prize for it to be worth it for you to gamble $0.001.

Emyrk commented 5 years ago

If the Network's hash power is relatively consistent block to block, I'd think we could pretty easily determine if we'd crack the top 50.

My issue is that if I make the cutoff 300 and all miners use the same defaults, all miners share the same minimum difficulty. That means the next block is guaranteed to contain only OPRs above difficulty X (if all miners behave the same). This pushes the number of records in the block to fewer than 300, and once again I raise the minimum. Repeat.

If we have 5000 OPRs about to be submitted, that could also negatively impact the network, so this design keeps the number constrained. But if we use that 300 cutoff, the number of records will gradually decrease until I hit the bottom of the range, 200. Then the minimum difficulty is 0, as I don't have the minimum number of records to decide, and boom, the next block has 5K. Then it's back to 300.

I'm not sure taking 90% of the last floor is enough either; if the discovery rate is non-linear, then differences in difficulty could be more than 10%, and then it's the same problem as above.

I need to figure out a way to calculate the floor without making it rise block to block.
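
A toy simulation of that loop (miner count, hashes per miner, and uniform random difficulties are all made-up assumptions) is one way to watch how the per-block record count evolves when everyone gates on the same floor:

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

const (
	miners     = 5000
	hashesEach = 100 // each miner keeps only its best hash for the block
	minRecords = 200
	floorRank  = 300
	blocks     = 50
)

// bestOf draws n uniform random "difficulties" and returns the highest.
func bestOf(r *rand.Rand, n int) uint64 {
	var best uint64
	for i := 0; i < n; i++ {
		if d := r.Uint64(); d > best {
			best = d
		}
	}
	return best
}

func main() {
	r := rand.New(rand.NewSource(1))
	var floor uint64 // previous block's floor; 0 means everyone submits

	for b := 0; b < blocks; b++ {
		var submitted []uint64
		for m := 0; m < miners; m++ {
			if best := bestOf(r, hashesEach); best > floor {
				submitted = append(submitted, best)
			}
		}
		fmt.Printf("block %3d: %4d records submitted\n", b, len(submitted))

		// Recompute the next block's floor per the rule discussed above.
		if len(submitted) < minRecords {
			floor = 0 // not enough records to set a floor
			continue
		}
		sort.Slice(submitted, func(i, j int) bool { return submitted[i] > submitted[j] })
		rank := floorRank
		if len(submitted) < rank {
			rank = len(submitted)
		}
		floor = submitted[rank-1]
	}
}
```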

mberry commented 5 years ago

I think the idea is sound, but this is really hard to implement and could lead to problems (aka angry complaints). Were a few powerful miners to turn off, all these dropped entries would suddenly have a chance, and many clients would simply not submit their winning OPR.

mberry commented 5 years ago

In comparison to mining and data acquisition, submitting a suboptimal number of entries costs cents more each day. I'd be more worried about my power costs, even with the relatively low energy consumption of LXRHash.

AroundTheBox commented 5 years ago

Emyrk,

I think I understand a little better what you are looking for. I'm curious what you'd find with other blockchains, because virtually all of them have a fixed difficulty and a variable block time (for each particular block), while PegNet will use fixed block times and variable difficulty.

I think for the reasons you brought up, if you want to use the 300th-ranked OPR, you need the threshold to be (difficulty of the Xth OPR) * (some multiple), where X is the minimum of 300 and the number of OPRs submitted. What you're saying makes sense: the multiple needs to be there to avoid drift, since you always want the number of submissions to be larger than the breakpoint.

I think the cleanest solution is to use the 50th OPR's difficulty (which hopefully will always exist) times a multiple. Of course the hard question is what multiple to use.

AroundTheBox commented 5 years ago

I could be wrong, but one interesting thought is that as the system grows and gets more overall hash power, we should expect the number of OPRs submitted to drop. With only a few miners there could be high variance in the 50th-place score, so it would be worth submitting if your difficulty were within a very wide margin. If there are many miners and the 50th PoW score falls in a very narrow range, then miners would only submit OPRs if their scores were very close.

Emyrk commented 5 years ago

I agree that with more miners it should be more consistent. I don't have data on how many miners we'd need, or how narrow the range would get.

I think you are onto something with taking the 50th times some multiple and using the variance in the top 50. I'm just thinking out loud here, but could we measure the variance of the top 50 and use that to determine the multiple? If more records exist, we could use the 200th record or whatever as a minimum, and fall back to the variance calculation if we have fewer than 200.

As long as that variance calculation is "safe" and leaves a lot of wiggle room, it would solve my problem of the minimum difficulty drifting up until I no longer have enough records to use a simple hard cutoff.
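
As a sketch of that idea (a hypothetical helper, not pegnet code): track the 50th-place difficulty over the last few blocks and set the floor a few standard deviations below its mean, so the margin widens when the cutoff is noisy and tightens as it stabilizes.

```go
package miner

import "math"

// varianceFloor takes the 50th-best difficulty observed in each of the last
// few blocks and returns a conservative submission floor: the mean cutoff
// minus safetySigmas standard deviations. A return value of 0 means "submit
// unconditionally".
func varianceFloor(cutoffs []uint64, safetySigmas float64) uint64 {
	if len(cutoffs) == 0 {
		return 0
	}
	var mean float64
	for _, c := range cutoffs {
		mean += float64(c)
	}
	mean /= float64(len(cutoffs))

	var variance float64
	for _, c := range cutoffs {
		d := float64(c) - mean
		variance += d * d
	}
	variance /= float64(len(cutoffs))

	floor := mean - safetySigmas*math.Sqrt(variance)
	if floor < 0 {
		return 0
	}
	return uint64(floor)
}
```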

AroundTheBox commented 5 years ago

Sounds good. One quick note (which I think is what you're saying, but I want to make sure of the distinction): we're interested in the variance of the PoW for the 50th OPR record, which should be smaller than the variance across the top 50 OPRs.

b2b-comm commented 5 years ago

How about taking a block size limit of, say, 2 MB?

Emyrk commented 5 years ago

I ended up with this https://github.com/pegnet/pegnet/blob/PNT-207_min_submit/utilities/simulate/DifficultyTarget.md
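
The linked write-up describes the approach settled on. As a rough, hypothetical illustration of one kind of order-statistics estimate that could back such a target (an assumption for illustration, not a summary of that document): if each hash is an independent uniform 64-bit value and the network performs n hashes per block, the expected k-th best difficulty is about 2^64 * (1 - k/(n+1)), so an observed cutoff lets you estimate n and then predict the 50th-place cutoff.

```go
package miner

import "math"

// estimateHashes inverts the formula above: given the observed k-th best
// difficulty in a block, estimate the total number of hashes performed.
func estimateHashes(kthBest uint64, k float64) float64 {
	p := 1 - float64(kthBest)/(1<<64) // fraction of hash space above kthBest (~ k/(n+1) in expectation)
	return k/p - 1
}

// expectedKthBest predicts the k-th best difficulty if n hashes are performed,
// e.g. the expected 50th-place cutoff for an estimated network hash count.
func expectedKthBest(n, k float64) uint64 {
	gap := k / (n + 1) * (1 << 64) // expected distance below the maximum difficulty
	return math.MaxUint64 - uint64(gap)
}
```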

WhoSoup commented 5 years ago

How about taking a block size limit of, say, 2 MB?

Could you elaborate on this a little more? How would this work?

Emyrk commented 5 years ago

I implemented my comment above.