livepeer / grants

⚠️ DEPRECATED ⚠️ Please visit the new homepage at https://livepeer.notion.site/Livepeer-Grants-Program-f91071b5030d4c31ad4dd08e7c026526

Livepeer business case development grant application #10

Closed: cfl0ws closed this issue 3 years ago

cfl0ws commented 4 years ago

Give a 3 sentence description of this proposal.

Chris from Chainflow and Gleb from StakingRewards recently open-sourced a Livepeer Orchestrator and Network Model (link). This grant proposal requests support for extending the model's depth, interactivity, and outputs through the add-on features listed in the sections below.

This model can be used by various demand- and supply-side actors to make data-driven decisions around, e.g., pixel pricing and demand forecasting. The goal is not only to improve but also to maintain the model over time, which can be reflected in the grant's structure (see Scope).

Describe the problem you are solving.

While building up a business case for an Orchestrator, we came to realize that there is a lack of resources and models that can be used to make data-driven decisions when participating in Livepeer today. Such a model requires a combination of demand-side estimations/observations (demand for transcoding, pixel price competitiveness compared to legacy actors, etc.) and supply-side parameters (network-related data, pricing, breakeven pixel cost given a particular Orchestrator set-up, etc.).
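To make the breakeven calculation concrete, here is a minimal sketch of the kind of supply-side arithmetic the model performs; all figures and names are hypothetical placeholders, not values from the actual spreadsheet:

```python
# Hypothetical breakeven calculation for an Orchestrator; all figures are
# illustrative placeholders, not values from the actual model.

def breakeven_price_per_megapixel(monthly_cost_usd: float,
                                  megapixels_per_month: float) -> float:
    """Minimum price (USD per megapixel) at which fee revenue covers costs."""
    return monthly_cost_usd / megapixels_per_month

# Example: $400/month in hardware, power and bandwidth, transcoding one
# 1080p30 stream (1920 * 1080 pixels * 30 fps) around the clock.
pixels_per_second = 1920 * 1080 * 30
megapixels_per_month = pixels_per_second * 60 * 60 * 24 * 30 / 1e6

print(breakeven_price_per_megapixel(400.0, megapixels_per_month))
```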

Given the network's relatively young age, the speed at which the space as a whole is moving, and the added complexity of highly multi-variate pixel pricing, creating such a model poses a significant challenge and requires multiple iterations and rounds of assumption validation with industry experts.

Further significant improvements to a flexible, adjustable, and scenario-based model like this will save many ecosystem participants time and allow them to develop their own strategies, be it staking, transcoding, hodling, etc.

Describe the solution you are proposing.

The solution we are proposing is to build upon the first model and bring it to a certain level of maturity in terms of breadth of scenarios and benchmarks and in the depth and validation of key assumptions, while improving data and outcome interpretation by implementing a set of features (see Scope).

This model can be used by a variety of Livepeer ecosystem participants to support their data-driven decision-making:

1) Demand-side actors, like video streamers and potential partners, to estimate and compare costs/prices between Livepeer and various centralized alternatives.

2) Supply-side actors: existing or prospective Orchestrators calculating the business case of operating one and figuring out the fees to charge, existing miners considering dedicating some rigs to Livepeer, and token holders and/or investors estimating their expected LPT+ETH rewards (a simplified sketch of that calculation follows the list below).

3) The ecosystem in general, to model the effects of certain updates on the network, evaluate the impact of governance decisions, etc.
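To illustrate the kind of reward estimate meant in point 2, here is a minimal, simplified sketch; the function, parameter names, and example values are illustrative assumptions rather than the actual model (it ignores fee income, compounding, and the active orchestrator set):

```python
# Hypothetical, simplified estimate of a delegator's per-round LPT inflation
# reward. Parameter names and example values are illustrative assumptions,
# not taken from the actual spreadsheet model.

def delegator_round_reward(stake_lpt: float,
                           total_bonded_lpt: float,
                           total_supply_lpt: float,
                           inflation_per_round: float,
                           reward_cut: float) -> float:
    """LPT minted to one delegator in one round, after the orchestrator's cut."""
    minted = total_supply_lpt * inflation_per_round   # new LPT this round
    pool_share = stake_lpt / total_bonded_lpt         # delegator's stake share
    return minted * pool_share * (1.0 - reward_cut)

# Example: 1,000 LPT staked out of 10M bonded, ~22M total supply,
# 0.02% inflation per round, 25% orchestrator reward cut.
print(delegator_round_reward(1_000, 10_000_000, 22_000_000, 0.0002, 0.25))
```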

Describe the scope of the project including a rough timeline and milestones

Here are some feature and improvement ideas that would become deliverables were we to receive the grant (further scoping with the community is desirable):

We expect to shortlist and prioritize 3-6 features with the help of, e.g., a community vote/poll on the most desired features. These features will then be taken into account when scoping the following feature sets:

**Limited set of features\*** (3 man-days / 24 hours) - deliverable within 2-4 weeks of grant allocation and/or feature vote close

**Comprehensive set of features\*\*** (3-5 man-days / 24-40 hours) - deliverable within 4-6 weeks of grant allocation and/or feature vote close

\*A limited set of features would mean 1-3 added features from the feature list, based on a 3 man-day effort estimate. The limited scope also means an incremental improvement in model capabilities and outcome detail.

\*\*A comprehensive set of features would mean 2-3 further features from the feature list, based on a further 3-5 man-days of effort. The full scope also means a significant improvement in model capabilities and outcome detail.

Please estimate hours spent on project based on the above

The estimated hours can be seen in the section above. TL;DR: limited set of features: 3 man-days / 24 hours; comprehensive set of features: 3-5 man-days / 24-40 hours.

Effort estimates are kept intentionally small, given that the size of the grant is limited. We would, however, expect that depending on the feature sets the overall effort may exceed the estimate, in which case the surplus may be addressed via subsequent grant applications.

adamsoffer commented 4 years ago

Thanks for the proposal, Chris. We’re excited about this! We think the financial model you and Gleb shared with the community is hugely valuable for both demand and supply-side actors and the ecosystem as a whole, and we'd love to see the added features and improvements you’re proposing. We’d like to provide a grant to help you execute the "comprehensive" vision for this model you described.

A couple of questions:

1. Is the idea to continue iterating on the spreadsheet? We think this would be a good tool to make accessible via the web — easier to share, discover, and measure usage/success. Of course, if this is out of scope for the 40 hours that can be made into a separate grant.
2. Is there a metric you’d like to use for measuring the success of this proposal?

cfl0ws commented 4 years ago

@adamsoffer that's fantastic news!

Regarding your questions -

> Is the idea to continue iterating on the spreadsheet? We think this would be a good tool to make accessible via the web — easier to share, discover, and measure usage/success. Of course, if this is out of scope for the 40 hours that can be made into a separate grant.

Yes, that's the idea for this proposal. Once we finish this phase and agree on what the updated spreadsheet looks like, we can create a web version of it, under a separate grant.

> Is there a metric you’d like to use for measuring the success of this proposal?

I'll discuss this with Gleb and get back to you soon 👍

glebdudka commented 4 years ago

@adamsoffer @chris-remus

The scope of the grant will be the sum of the value-added features. The main metrics for measuring their success would be that the features are 1) implemented, 2) functional, 3) delivered within the specified timeframe (with a potential 1-2 week deadline buffer for force majeure), and 4) include a sufficient degree of automation or number of pre-defined scenarios to minimize the manual input necessary.

If those are not granular enough, we would have to look into each feature the community ends up selecting for the model (e.g. for "cost or pixel price benchmarks based on geographies", the scope could be to cover at least 5 major geographies, e.g. US/Canada, Eastern and Western Europe, Asia, South-East Asia, etc.).

nelsorya commented 4 years ago

Hey @chris-remus & @glebdudka, ok great. We see this as potentially being a great educational tool for the orchestrator and broadcaster community. As such, we think linking the measurement of success to community engagement would be a good metric. For example, we would suggest 10+ parties engaging in the discussion as a success. This could be through comments/feedback in the spreadsheet, GitHub, or Discord; we would leave it fairly open to you to determine how best to drive engagement.

cfl0ws commented 4 years ago

@nelsorya To clarify, would this be 10 different parties engaging or a total of 10 engagements, e.g. 5 conversations with at least 2 responses?

In addition to your list of engagement forums, I'd add forum.livepeer.org as well.

Also, would you consider tiers, e.g. 50% of the grant for 5 engagements, 100% for 10 engagements, and maybe a 25% bonus for over 20 engagements?

glebdudka commented 4 years ago

@nelsorya I do get where one is going with this; however, a badly executed task, or one where the result/methodology is unclear, often drives far more engagement. If I, for example, write a script that is deliberately not self-explanatory, I "hack" the engagement metric, since many people will ask questions.

I do see the benefit of engagement as a means of measuring whether the community is actually interacting with and using the model; however, I am more of a proponent of investing a bit more time in very clearly defining the scope at the beginning, with success being that said scope is delivered on time.

nelsorya commented 4 years ago

@chris-remus we were thinking 10 different parties engaging. We would be flexible as to which platforms this happens across; we just want to be able to measure community engagement with the model.

Yes we were thinking we could tier the milestones based on the number of engagements.

nelsorya commented 4 years ago

@glebdudka I get where you are coming from. We would be open to setting the milestone around another metric, say quality, measured through surveys with orchestrators or broadcasters, for example.

Any metric we set is potentially vulnerable to gaming; even optimising for speed creates its own misalignment of incentives. Optimising for engagement ideally increases the impact of the model for the community.

cfl0ws commented 4 years ago

@nelsorya Could we schedule a brief call to finalize the details?

nelsorya commented 4 years ago

Hey Chris, yeah sure I will coordinate with you on discord to find a time for next week.

cfl0ws commented 4 years ago

@nelsorya & @adamsoffer do you have an ETA on the suggested metrics? We're ready to get rolling on this!

nelsorya commented 4 years ago

Hey @chris-remus, sorry for the delay, here are the metrics and milestones we propose.

  1. Scope - $500
  2. Spreadsheet phase 1 - $1,500
  3. Spreadsheet phase 2 - $1,000 (incorporating any community feedback)
  4. Community call & AMA - $500
  5. Ongoing maintenance (6 months) - $500

Success-linked metrics:

- Attendees on the call: 5 / 10 / 20 people ($250 / $500 / $750)
- Recording views: 15 / 30 / 50 views ($250 / $500 / $750)

(All amounts would be paid out in LPT at the completion of each milestone.)
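For clarity, here is how the tier logic reads (a hypothetical sketch; the thresholds and payouts come from the proposal above, but the function itself is just an illustration):

```python
# Hypothetical reading of the tiered success metrics above; thresholds and
# payouts are taken from the comment, the function itself is illustrative.

def tier_payout(value: int, thresholds=(5, 10, 20),
                payouts=(250, 500, 750)) -> int:
    """USD payout for the highest threshold reached (0 if below the first)."""
    payout = 0
    for t, p in zip(thresholds, payouts):
        if value >= t:
            payout = p
    return payout

print(tier_payout(12))                   # call attendees: 10+ tier -> 500
print(tier_payout(50, (15, 30, 50)))     # recording views: 50+ tier -> 750
```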

Let me know if you have any thoughts or questions on this.

cfl0ws commented 4 years ago

@nelsorya 1-4 look good. Let's leave 5 open, as it's probably part of a longer discussion. For example, we'd need to clearly define what "maintenance" is.

Rather than delay kicking this off any further, I'd suggest we proceed with 1-4, with an agreement that we'll maintain it under mutually agreeable terms.

How does that sound?

adamsoffer commented 4 years ago

@chris-remus Sounds good. Excited to get this rolling. Can you share target dates for milestones 1 and 2?

nelsorya commented 4 years ago

Ok great @chris-remus, that sounds good. I'd just add, on the recording views for the second success metric, that we were thinking it would be measured one month after the recording was posted.

glebdudka commented 4 years ago

Hey @nelsorya, could you please specify what position 1, "Scope", stands for? The way I read it, this is about scoping the exact feature sets 1 and 2 based on, e.g., community voting, and gathering potential ideas/proposals for the model from the community. Did I get this right? As soon as you confirm or clarify, we can commit to a timeline.

nelsorya commented 4 years ago

Hey @glebdudka, that's right. We were thinking step 1 would be setting the scope with the exact features you intend to include in the product; incorporating any community features or ideas that seem achievable would be great as well.

cfl0ws commented 4 years ago

Thanks for the clarification @nelsorya. @glebdudka and I will get back to you on milestones today.

cfl0ws commented 4 years ago

@nelsorya here are our suggested milestones, assuming we get approval to start by Friday, May 8 -

nelsorya commented 4 years ago

Hi @chris-remus, that timeline sounds good, lets run with that.

cfl0ws commented 4 years ago

Fantastic @nelsorya, we'll get started!

glebdudka commented 4 years ago

Hi @nelsorya, hi @dob,

See below the set of features for the community voting/prioritization of the Scope phase. Let us know what would be the best/easiest way to let people vote on features. We could share this post for comments, post separately on Discord, and/or post on Twitter (e.g. we post and you retweet). We would suggest all three; we just need your go-ahead.

Dear Livepeer community,

As mentioned in the grant application (above), we would like to further improve the Livepeer Orchestrator business model which we open-sourced. To make the model as useful to the community as possible, we would like to ask you to vote for the features you think are most useful to you, and also to suggest features we have not thought of. We would be glad to consider those as well!

As discussed in the grant application, the model will be further developed in two phases. The total scope of Phase I is 5 pt and of Phase II 7 pt. Below you can find the list of features and their relative effort estimates.

General Model usability improvements:

Demand Side and Network Assumptions:

Orchestrator Cost modelling:

Other features:

Thanks for your participation!

Best, Gleb & Chris

nelsorya commented 4 years ago

Hey @chris-remus @glebdudka, thanks for this. The list looks good to me. Could you share it in the orchestrator channel in Discord? I think that will be the best place to discuss these features with the community.

cfl0ws commented 4 years ago

@nelsorya Yes, we'll post to the forum and share in the orchestrator channel by Monday.