pokt-network / pocket-core

Official implementation of the Pocket Network Protocol
http://www.pokt.network
MIT License

[Guardrails] Probabilistic Proofs - Model Claim and Stake Data for Parameter Thresholds #1523

Closed. jessicadaugherty closed this issue 1 year ago.

jessicadaugherty commented 1 year ago

Objective

Build a data model using claims and stakes to set a ProbabilityOfProofRequest parameter value that proves with 99% confidence that servicers cannot claim more POKT than they risk burning.

This model should also set the parameter value for ProofRequiredThreshold by using stake data against ProbabilityOfProofRequest, which will enable us both to validate the proposed solution and to lay the foundations for calculating how much additional scale can be achieved with these changes.
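For intuition, this boils down to an expected-value check: a claim whose proof is never requested pays out in full, while a requested-but-unprovable claim gets burned. A minimal sketch of that check (the mechanism as described above; the function name and all numbers below are illustrative, not part of the spec):

```python
# Rough sketch, not the actual on-chain logic: for a claim below the
# ProofRequiredThreshold, a proof is only requested with probability p.
# Cheating is unprofitable in expectation when the expected burn outweighs
# the expected payout. All names and values here are illustrative.

def cheating_is_unprofitable(claim_pokt: float,
                             burn_pokt: float,
                             p_proof_request: float) -> bool:
    expected_gain = (1.0 - p_proof_request) * claim_pokt  # claim slips through unchecked
    expected_loss = p_proof_request * burn_pokt           # claim is checked and burned
    return expected_loss >= expected_gain

# Example: with p = 0.25, a 5 POKT claim must put at least 15 POKT at risk.
print(cheating_is_unprofitable(claim_pokt=5.0, burn_pokt=15.0, p_proof_request=0.25))  # True
```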

Origin Document

The V0 growth budget has been documented extensively:

These documents and measurements have resulted in a proposed scaling solution called Probabilistic Proofs:

Goals

Deliverable

Non-goals / Non-deliverables

General issue deliverables


Creator: @jessicadaugherty Co-owner: @Olshansk

Olshansk commented 1 year ago

A WIP for this document can be found at #1526. For convenience, here is a direct link to the rendered markdown file.

@RawthiL At your convenience, could you review the document and let me know what you think? Primarily, I'm looking for feedback on the following:

I spent a lot of time thinking about the correct definitions and approach here and really tried to see how/where negative binomial distributions could fit in, as we discussed offline. I put together this blog post while thinking through it but ultimately went with an alternate approach.

cc @jessicadaugherty

RawthiL commented 1 year ago

Great work and nice blog post! I made some comments, but it seems to be on the right track and the selected distribution is correct. I think that by changing the approach from the PDF to the CDF you could follow almost the same rationale and get a more accurate result: instead of aiming for a 1% chance of getting k failures, aim for "99% of cases will have fewer than k failures". I will try to get the values for \mu and \sigma from the transaction data. The definition of success and failure is only a convention; I like how you presented it, since it translates directly into distribution parameters.

I will default to the standard excuse of any statistics guy =P

this distribution (the exponential) is a particular case of the one I suggested (negative binomial)
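To make the PDF-vs-CDF distinction concrete, here is a minimal sketch using scipy, modeling the number of unchecked claims before a proof request as a negative binomial with r = 1 (the geometric case); the value of p is purely illustrative, not the proposed parameter value:

```python
# Illustrative only: compare the two framings discussed above.
from scipy import stats

p = 0.25                   # assumed ProbabilityOfProofRequest, for illustration
dist = stats.nbinom(1, p)  # failures (no proof requested) before the first proof request

k = 16
print(dist.pmf(k))      # PDF framing: chance of exactly k free passes (~0.25%)
print(dist.cdf(k - 1))  # CDF framing: chance of fewer than k free passes (~99%)
```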

RawthiL commented 1 year ago

I will give you some info here; if you prefer, I can make a report or a repo to share the calculations.

The distribution of requested POKT per claim is a mess (as expected). Here is a sample of the top 4 chains, for clarity:

[Image: distribution of requested POKT per claim for the top 4 chains]

I integrated it numerically to obtain the CDF and then calculated the thresholds. The following table shows the percentage of claims that fall below the given POKT value. This should guide the selection of the ProofRequirementThreshold:

| Percentage of claims | Limit (POKT) |
|----------------------|--------------|
| 25.0 | 1.010101 |
| 50.0 | 2.525253 |
| 75.0 | 5.555556 |
| 90.0 | 16.161616 |
| 95.0 | 18.181818 |
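For reference, a table like this can be reproduced directly from the per-claim amounts with an empirical percentile calculation; a sketch assuming numpy and a hypothetical one-column export of the requested POKT per claim:

```python
# Sketch: empirical thresholds from claim data (the file name is hypothetical).
import numpy as np

claim_pokt = np.loadtxt("claims_pokt.csv")  # one requested-POKT value per claim

for pct in (25, 50, 75, 90, 95):
    print(f"{pct}% of claims are below {np.percentile(claim_pokt, pct):.2f} POKT")
```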

Here are some graphs showing where these thresholds lie in the CDF and PDF of POKT per claim:

[Images: CDF and PDF of POKT per claim, with the thresholds marked]


Regarding the number of failed claims (for any reason) in the current network, the percentage is really small. Out of more than 500k claims, only 0.43% ended up failing. There is minimal risk of being slashed if you do things right.


Finally, regarding the % of the block that is occupied by proofs: we don't have the data, and we are a little skeptical that we will be able to get it. We will tell you later once we have a final thought on this...

Olshansk commented 1 year ago

> Great work and nice blog post! I made some comments, but it seems to be on the right track and the selected distribution is correct. I think that by changing the approach from the PDF to the CDF you could follow almost the same rationale and get a more accurate result: instead of aiming for a 1% chance of getting k failures, aim for "99% of cases will have fewer than k failures". I will try to get the values for \mu and \sigma from the transaction data. The definition of success and failure is only a convention; I like how you presented it, since it translates directly into distribution parameters.
>
> I will default to the standard excuse of any statistics guy =P
>
> this distribution (the exponential) is a particular case of the one I suggested (negative binomial)

Thank you! Also, really appreciate the initial direction you provided as well as all the detailed feedback.

I left a follow up thought/comment here: https://github.com/pokt-network/pocket-core/pull/1526#discussion_r1116476754

Olshansk commented 1 year ago

@RawthiL Following up on the distribution data you provided: that was exactly what I was looking for, and I appreciate the clarity. If you push it to a gist, I can append it to our spec?

Side note: interesting how the claim distribution is bimodal to some degree.

Here are the final values using that outcome:

[Screenshot: final parameter values]
RawthiL commented 1 year ago

> If you push it to a gist, I can append it to our spec?

No idea how that works, but it would require the data to be somewhere (a gist cannot hold data files, I think).


The ProofRequirementThreshold should be updated regularly, as it can vary for many reasons:


Careful, k = 16 does not represent 99.99%. In probability, when approaching the limits, the numbers grow really fast:

99% -> k = 16
99.9% -> k = 24
99.99% -> k = 32

This is why security at 99.99% (like atomic applications) is so expensive...
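A quick sanity check of these k values, under the assumption (inferred from the numbers, not stated in this thread) that the per-claim proof-request probability is p = 0.25, so the chance of k consecutive unchecked claims is (1 - p)^k:

```python
# Illustrative check of the confidence -> k relationship quoted above.
import math

p = 0.25  # assumed per-claim probability that a proof is requested
for conf in (0.99, 0.999, 0.9999):
    k = math.log(1 - conf) / math.log(1 - p)
    print(f"{conf:.2%} -> k ≈ {k:.1f}")  # prints ~16.0, ~24.0, ~32.0
```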


> Side note: interesting how the claim distribution is bimodal to some degree.

Fun answer: to me it looks like the Cs-137 spectrum. This "bimodality" is the effect of the CP paying bad nodes in order to measure them. Before GeoMesh, the network-wide distribution was almost exponential; the second lobe of higher rewards was completely diluted by the low rewards. This was one of the reasons why I think PIP-22 was not (completely) a fit at that time.

Olshansk commented 1 year ago

> No idea how that works, but it would require the data to be somewhere (a gist cannot hold data files, I think).

Maybe a Google Drive?


> Careful, k = 16 does not represent 99.99%. In probability, when approaching the limits, the numbers grow really fast:

Thanks for calling that out. It was a typo on my end. I intended 2 nines confidence, not 2 decimal nines 😅


> Fun answer: to me it looks like the Cs-137 spectrum.

TIL! Gives some context to the Chornobyl accident and how quickly things degraded.

This "binomial" is the effect of the CP paying bad nodes for measuring them.

Interesting observation.

> Before GeoMesh, the network-wide distribution was almost exponential; the second lobe of higher rewards was completely diluted by the low rewards.

Good context. I'll keep this in mind when reading it.


Re ProofRequirementThreshold: I'd like to pick values that are sufficient based on a one-time analysis and that only need to be reviewed every couple of months, or when very large shifts in the network happen.

I read all of the points you provided, but I'd like to emphasize that this is a feature with a medium lifespan (until V1 launches), so we're aiming for "good enough with low maintenance" rather than "great".

The cost of not implementing it (assuming the network grows) is to cap the network's capacity at ~3B. A good-enough solution is a worthwhile tradeoff compared to limiting the network's growth, but we don't want to invest too much of the team's time.

For example, the following would trigger a review of the parameter:

If none of these come into play, I believe the penalty/reward/scaling tradeoff is worthwhile given the numbers we have selected. Does that approach make sense?

RawthiL commented 1 year ago

Here is the Gist with all the data. Using the available info and data from before block 86500 (after that there was a change in the number of Apps in the network), I obtained that the block's TX portion is composed of:


Regarding the review of the parameters, the low-maintenance mode is OK; we can follow the guidelines you provided.

RawthiL commented 1 year ago

Just noticed the author names; my name is:

Ramiro Rodríguez Colmeiro

(double last name, common in the southern lands)

Olshansk commented 1 year ago

> (double last name, common in the southern lands)

@RawthiL I've updated your name in the document :)

> Regarding the review of the parameters, the low-maintenance mode is OK; we can follow the guidelines you provided.

Awesome, we're on the same page 👌

> Proofs: 67.06 %
> Claims: 21.64 %
>
> This is good I think...

Yes, this is perfect. It vindicates the approach with data.


I'll be at ETH Denver, so if I don't get to it this week, I'll put the finishing touches on the document next week (include your data, improve the flow, etc...).

Also, I appreciate all your help with this: the data, the feedback, the tips, etc. I learnt a ton going through this exercise and I'm looking forward to many more similar ones in v1. 🙏 💯

nodiesBlade commented 1 year ago

Hi everyone,

If the goal is to fit more claims and proofs into a block, wouldn't it be simpler to investigate and experiment with increasing the max block size as a stop-gap solution to scale out V0?

jessicadaugherty commented 1 year ago

Hey @PoktBlade - thanks for chiming in. I'd love to see that data as well before committing to this implementation. Details about the block size experiment can be found here: https://github.com/pokt-network/pocket-core/issues/1503

nodiesBlade commented 1 year ago

Is the idea to gather details about both implementations (block size vs probabilistic proofs) and then make a decision on how to scale v0? Off the top of my head, researching block size appears to be the lowest-friction route. I'll move my convo over there.

jessicadaugherty commented 1 year ago

That was exactly the plan, @PoktBlade! A goal was to automate some of these measurements too, so we could keep the growth budget data up to date. Although I'm not sure how important a goal that is anymore, as I don't know how much of it is applicable to V1 scalability.