Closed by yondonfu 3 years ago
Hey @yondonfu - the committee has reviewed the proposal and we're excited to fund this incentive program and get it started on 12/14. We feel 21,000 LPT is an appropriate incentive for this initial experiment. We just had a couple of questions before getting started:
How did you arrive at three weeks? Do you think this gives the program enough time to succeed and gives orchestrators enough time to improve their setups? Given this is the first experiment, bounding it to a shorter three-week period is understandable, though we do think a longer-term program will be helpful. Have you thought about adding 1, 2, 3 or more weeks to the program length, or do you prefer to keep the initial experiment shorter to first see how it goes?
What are your thoughts on what comes after the 3 weeks? Given it's a relatively short time period, there's a chance the program creates an incentive to stand up a temporary high-quality setup, collect rewards, and then scale back at the end of the program. Is this a concern?
@adamsoffer
Regarding question 1:
I arrived at 3 weeks because somewhere around a month felt like the minimum amount of time that gives existing or potential node operators an opportunity to become aware of the program and start trying to improve their scores. 3 weeks is proposed instead of a full month because there was a desire to keep the weekly LPT budget on the higher side while also being budget conscious for the overall program; cutting the length by a week maintains the target weekly LPT budget while reducing the total program budget. A less important reason is that 3 weeks also happens to align the end of the program with the new year.
While I think it will be possible for some existing node operators to improve their setups, I suspect the largest potential performance gains would come from new node operators that already have access to the resources that would help them succeed (i.e. hardware + bandwidth). I agree that a longer-term program would be helpful, since the most important factor in attracting new node operators with access to those resources is program awareness. While the shorter length of this initial program may turn out not to be enough time for awareness to spread widely, it might still be useful to consider it a precedent-setting campaign that makes spreading awareness of follow-on programs easier. I think it might make sense to run longer subsequent iterations of the program, but the impact of a shorter program will be a useful input to consider before making that decision.
Regarding question 2:
After the 3 weeks, I think the following would be good next steps:
I do think it's possible for some participants to create a temporary high-quality setup, collect rewards, and then scale back. Even if that happens, for the duration of the program the value of the rewards will have prompted certain participants to at the very least acquire resources that improve the performance of the overall network, even if only temporarily. That would still be a useful learning, because the value required to trigger that observed behavior could then be an input into the design process for incentivizing sustained high-quality setups.
Makes sense. Thanks for the explanation and context. And next steps sound good. Let’s get it started :)
Hey @adamsoffer @yondonfu
Metrics To Track
The % of LPT rewards that are then staked vs. not staked one week after they are distributed
On a weekly basis, the number of orchestrators that met the threshold total score
On a weekly basis, the number of newly registered orchestrators
On a weekly basis, the total scores of the top 5 orchestrators in each region
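The first two metrics above could be computed with a minimal script once the underlying data is collected. A rough sketch, where the record layout, field names, threshold value, and all figures are hypothetical illustrations rather than actual program data:

```python
# Hypothetical reward records: (orchestrator, lpt_rewarded, lpt_staked_after_1wk).
# These values are made up for illustration only.
rewards = [
    ("orch_a", 500.0, 500.0),
    ("orch_b", 300.0, 0.0),
    ("orch_c", 200.0, 100.0),
]

# Metric 1: % of distributed LPT rewards staked one week after distribution.
total = sum(r[1] for r in rewards)
staked = sum(r[2] for r in rewards)
pct_staked = 100 * staked / total
print(f"{pct_staked:.1f}% of rewards staked one week after distribution")

# Metric 2: weekly count of orchestrators meeting a (hypothetical) threshold
# total score on the leaderboard.
THRESHOLD = 0.85
weekly_scores = {"orch_a": 0.91, "orch_b": 0.78, "orch_c": 0.88}
met = [o for o, s in weekly_scores.items() if s >= THRESHOLD]
print(len(met), "orchestrators met the threshold this week")
```

The remaining two metrics (newly registered orchestrators and top-5 scores per region) would be the same pattern of filtering and aggregation over weekly snapshots.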
Is there a way Dappquery can help track the above metrics for the grant? I created this Livepeer dashboard previously but couldn't take it forward: https://dappquery.com/dapp/livepeer-10011 It has nice weekly and monthly charts. My proposal is to create a dedicated orchestrator dashboard with the above metrics.
Hey @napolean0 - I don't think a dashboard for these metrics is necessary, since it was such a short campaign and the top scores each week have already been collected, but thanks for the suggestion!
Thanks @adamsoffer. Beyond a full dashboard, even a simple chart or SQL query could be written to track orchestrator metrics. Livepeer dataset: https://analytics.dappquery.com/browse/34/schema/livepeer Livepeer queries: https://analytics.dappquery.com/collection/21
Recap slides and metrics shared on the most recent community call.
Give a 3 sentence description about this proposal.
An experimental performance incentive program that distributes LPT rewards to orchestrators that perform well on the leaderboard.
Describe the problem you are solving.
In short, how can the community incentivize orchestrator performance improvements that result in a reliable, scalable and geographically distributed network?
In order for the Livepeer network to serve video app developers, the network needs to be reliable, scalable, and geographically distributed.
The reliability, scalability and geographic distribution of the network depends on the reliability, scalability and geographic distribution of individual orchestrators that make up the network. While the existing in-protocol inflationary LPT reward mechanism has helped attract a number of orchestrators to the network already, the performance of these orchestrators today does not achieve the aforementioned desired properties.
A reasonable goal for live streams is real-time transcoding (i.e. the time for transcoded results to be returned is less than or equal to the length of the input video) for ≥ 99% of requests. At the moment, the orchestrator leaderboard indicates that only one orchestrator comes close to meeting this requirement in North America, and no orchestrators come close to meeting it in Europe and Asia. Thus, the existing in-protocol inflationary LPT reward mechanism alone is clearly insufficient to incentivize orchestrator performance improvements that achieve the desired properties of reliability, scalability, and geographic distribution. While network fees do provide an incentive for orchestrators to perform well, at this stage they are also likely insufficient on their own because a) the value of fees is eclipsed by the value of LPT rewards and b) in order for the value of fees to surpass the value of LPT rewards, the network needs to be performant enough to attract more fees.
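The real-time goal above can be expressed as a simple check: the fraction of requests whose transcode latency is at most the input segment's duration must be ≥ 99%. A minimal sketch, assuming hypothetical (duration, latency) pairs in seconds; this is not the leaderboard's actual scoring code:

```python
def meets_realtime_goal(segments, target=0.99):
    """Return True if at least `target` fraction of segments were transcoded
    in no more time than the segment's own length (i.e. in real time).

    `segments` is a list of (input_duration_s, transcode_latency_s) pairs.
    """
    if not segments:
        return False  # no data: cannot claim the goal is met
    realtime = sum(1 for duration, latency in segments if latency <= duration)
    return realtime / len(segments) >= target

# Example: 100 two-second segments, exactly one returned slower than real time.
segments = [(2.0, 1.5)] * 99 + [(2.0, 2.6)]
print(meets_realtime_goal(segments))  # 99/100 = 0.99, so the goal is met
```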
Describe the solution you are proposing.
An experimental performance incentive program, funded by an LPT grant, based on measured performance on the orchestrator leaderboard.
The requested grant amount is 7000 LPT each week for 3 weeks. Up to 7000 LPT will be distributed to top performing orchestrators each week. Any LPT that is not distributed will be returned at the end of the 3 weeks.
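The budget described above works out to 21,000 LPT total, with any undistributed weekly remainder returned at the end. A small sketch of the accounting, where the weekly distribution amounts in the example are hypothetical:

```python
# Program parameters from the proposal: 7000 LPT per week for 3 weeks,
# with any LPT not distributed returned at the end.
WEEKLY_BUDGET = 7000
WEEKS = 3

def program_accounting(weekly_distributed):
    """Given the LPT actually distributed each week, return a tuple of
    (total_grant, total_distributed, amount_returned)."""
    assert len(weekly_distributed) == WEEKS
    assert all(0 <= d <= WEEKLY_BUDGET for d in weekly_distributed)
    total_grant = WEEKLY_BUDGET * WEEKS
    total_distributed = sum(weekly_distributed)
    return total_grant, total_distributed, total_grant - total_distributed

# Hypothetical example: 500 LPT goes undistributed in week 2.
print(program_accounting([7000, 6500, 7000]))  # (21000, 20500, 500)
```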
The goal of this program is to kickstart experimentation with performance incentive programs that complement the existing in-protocol inflationary LPT reward mechanism and to generate empirical data that can be used by the community to design additional incentive programs for the network.
Describe the scope of the project including a rough timeline and milestones.
Timeline
Rules
Metrics To Track
Please estimate hours spent on project based on the above.
The program will last for 3 weeks and an additional week after its conclusion will be used to finish compiling metrics for the program.