Closed Saumay-Agrawal closed 3 years ago
Thanks for this great proposal. Here are some questions and comments:
For phase 2, data collection: Do you have comfort and experience in programming in go? Since you need to communicate with nodes on the network in order to get pricing information, it might be easiest to fork the existing go-livepeer codebase and update it to perform these requests and write the info to your DB.
The grant proposes a payout after phase 2, but it doesn't propose a usable output for the community. (Unless you're somehow making the raw DB queryable?). Perhaps the first payout milestone should come upon delivery of something useful for the community?
The output of a nice, usable, hosted tool for the community is nice, but I worry about the incentives for long term maintenance. As you mention in the proposal, you'll iterate for 3 months and keep it running for a year. This is a great start, but I think grants are best suited towards efforts that are being run as part of a sustainable, self motivated effort, rather than just funding ongoing development and maintenance. To that end, perhaps think about alternate outputs or milestones that don't require any ongoing maintenance or hosting. For example, could there be a single open source tool that anyone could download and run, that would start up their own "price insight daemon", that would connect to the network and continue to pull data, populating their own local DB and dashboard to view current pricing info? If this were open source, and standalone, then anyone could use it, contribute to it, etc. It could be the backend that powers your own custom UI that you propose as well...but there'd be less risk of all the work becoming useless if you cease to host the web app.
In general it feels like there are a lot of unknowns in terms of feature requests and implementation details still. This is totally fine! Part of the process of building this will be figuring those things out. But it might make sense to focus purely on a single shorter term milestone and grant, and then upon achieving it, proposing the next phase, rather than trying to estimate required resources/payouts/etc for all the phases before it's even clear how you'll go about getting the initial data into the system.
Hope that's helpful input to you and the grant committee.
Hi Doug! Apologies for the late reply, I took some time to deep dive into the codebase in addressing your concerns.
For phase 2, data collection: Do you have comfort and experience in programming in go?
I have substantial experience in backend development, though it's safe to say I am still learning Go. Some members of my team are well versed in the language, so the learning curve will not be a problem.
Since you need to communicate with nodes on the network in order to get pricing information, it might be easiest to fork the existing go-livepeer codebase and update it to perform these requests and write the info to your DB.
GetOrchestratorInfo
Combining the realizations from points 1 and 2 with your insights on the "price insight daemon", I concluded that I can isolate the functionality into the following 3 atomic components:
Component 1: Querying pricing data.
Component 2: Price Insight daemon
Component 3: User Interface for Pricing Tools dashboard
The grant proposes a payout after phase 2, but it doesn't propose a usable output for the community. (Unless you're somehow making the raw DB queryable?). Perhaps the first payout milestone should come upon delivery of something useful for the community?
This was a miscommunication on my part. The intent was to address the problem of raw data. Hence, the proposed database of aggregated data was meant to be queryable, so that anyone can use this service for their own purposes.
Based on the reworked strategy above, phase 2 will deliver component 2, which will potentially be a usable output for the community.
The output of a nice, usable, hosted tool for the community is nice, but I worry about the incentives for long term maintenance. As you mention in the proposal, you'll iterate for 3 months and keep it running for a year. This is a great start, but I think grants are best suited towards efforts that are being run as part of a sustainable, self motivated effort, rather than just funding ongoing development and maintenance. To that end, perhaps think about alternate outputs or milestones that don't require any ongoing maintenance or hosting. For example, could there be a single open source tool that anyone could download and run, that would start up their own "price insight daemon", that would connect to the network and continue to pull data, populating their own local DB and dashboard to view current pricing info? If this were open source, and standalone, then anyone could use it, contribute to it, etc. It could be the backend that powers your own custom UI that you propose as well...but there'd be less risk of all the work becoming useless if you cease to host the web app.
This totally makes sense. Instead of a monolithic approach, which would invariably become a single point of failure, the 3 components follow a Lego strategy: composable together, but atomic in their operation.
It goes without saying that all of the work envisioned above will be open source, so the Livepeer community will be able to contribute to these individual components of its own volition.
In general it feels like there are a lot of unknowns in terms of feature requests and implementation details still. This is totally fine! Part of the process of building this will be figuring those things out. But it might make sense to focus purely on a single shorter term milestone and grant, and then upon achieving it, proposing the next phase, rather than trying to estimate required resources/payouts/etc for all the phases before it's even clear how you'll go about getting the initial data into the system.
I agree. I propose reducing the scope to building components 1 and 2, which will invariably reduce the total grant amount needed. Based on the success of grant 1, I can submit follow-up proposals to build component 3.
Hope the above makes sense and addresses all your concerns. If not, I'm happy to answer any further questions. If yes, let me know the next step to get started on this grant 🚀
Thank you for investing your time and providing in-depth feedback on my proposal. I appreciate it :)
Hey @Saumay-Agrawal - thank you for this great and comprehensive proposal and for the research you've already conducted to help make this tool a reality. The committee met on Friday and, as Doug alluded to, we think this proposal can be pared down in scope. As a phase 1, we'd love to see an MVP. Could you put something together that polls for pricing info and exposes a simple API, with the goal of having something up and running over the course of 1-2 weeks? Our preference is a focus on components 1 and 2 and a bare-bones frontend that demonstrates the API (raw numbers with pricing info would be fine for this first phase). Like you said, based on the success of grant 1, you can always follow up with proposals that build on this one to include analytics, graphql wrappers, data visualizations, etc.
If this is agreeable with you could you follow up with an updated set of technical specifications (ie the API spec), a simple UI mockup, and a time estimate?
Thanks again!
For Phase 1, there is an open PR that modifies the /registeredOrchestrators endpoint to return pricing info.
The endpoint can be polled to populate a separate DB for historical metrics, perhaps as part of the price insight daemon that's described in Phase 2. That avoids additional changes to go-livepeer itself, which seems best in terms of keeping those concerns separate.
Note that we only populate the DB with active O's when starting the livepeer node as a broadcaster and this endpoint relies on querying the DB.
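The polling approach described above can be sketched in a few lines. This is a hypothetical illustration, not code from the proposal: the endpoint path comes from the PR under discussion, while the host/port, SQLite schema, and hourly interval are assumptions.

```python
# Sketch of a "price insight daemon" loop: poll the node's
# /registeredOrchestrators endpoint and append price observations to a
# local SQLite DB for historical metrics.
import json
import sqlite3
import time
import urllib.request

NODE_URL = "http://localhost:7935/registeredOrchestrators"  # assumed CLI port
POLL_INTERVAL_S = 3600  # roughly match the node's hourly DB refresh

def store_prices(conn, orchestrators, now=None):
    """Insert one (time, address, pricePerPixel) row per orchestrator."""
    now = now or int(time.time())
    conn.execute(
        "CREATE TABLE IF NOT EXISTS price_history "
        "(time INTEGER, address TEXT, pricePerPixel TEXT)"
    )
    conn.executemany(
        "INSERT INTO price_history VALUES (?, ?, ?)",
        [(now, o["Address"], o["PricePerPixel"]) for o in orchestrators],
    )
    conn.commit()

def poll_forever(conn):
    """Fetch the endpoint on a fixed interval and persist each snapshot."""
    while True:
        with urllib.request.urlopen(NODE_URL) as resp:
            store_prices(conn, json.loads(resp.read()))
        time.sleep(POLL_INTERVAL_S)
```

Keeping the collector outside go-livepeer, as suggested, means the node only ever serves its current view while the daemon owns the historical DB.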
Hi @adamsoffer,
If this is agreeable with you could you follow up with an updated set of technical specifications (ie the API spec), a simple UI mockup, and a time estimate?
I have revised the proposal keeping in mind the bare-minimum requirements for an MVP. Hence, the scope of the proposal was reduced to the exact specifics you suggested. Thank you for your valuable input.
Considering the helpful inputs from @J0sh and @NicoV, I tried the "/registeredOrchestrators" endpoint and found that the data wasn't being updated over time, as NicoV pointed out. Hence, I couldn't find any alternative to modifying the go-livepeer codebase, and kept that as component 1. If the community has guidelines for modifying the core codebase, or if there is another PR in the works for this, please let me know.
I have added the specific components you mentioned (tech specs, UI mockups) below. It goes without saying that the other aspects of the proposal, like the team, the quality of deliverables, and the commitment to maintenance, remain the same.
Hope this addresses your concerns regarding the proposal. If not, I'm happy to come back with more answers. If yes, please let me know about the next steps.
Thank you again for your feedback and suggestions! :)
Modification of the go-livepeer codebase to expose orchestrator metadata via a web server endpoint.
A separate backend API service that will derive analytics from the web endpoint of component 1.
A bare-bones UI to showcase the use of the API endpoints of component 2.
Properties | Type |
---|---|
address | string |
serviceURI | string |
lastRewardRound | integer |
rewardCut | float |
feeShare | float |
delegatedStake | float |
activationRound | integer |
deactivationRound | integer |
active | boolean |
status | string |
pricePerPixel | float |
Properties | Type |
---|---|
time | timestamp |
pricePerPixel | float |
Path | Response | Description |
---|---|---|
/orchestratorStats | [Orchestrator] | Returns current orchestrator statistics from the Livepeer network. |
Path | Response | Description |
---|---|---|
/orchestratorStats* | [Orchestrator] | Returns last updated orchestrator statistics from the DB (created as part of component 2) |
/priceHistory/:orchestratorAddress | [PriceHistory] | Returns pricing history for the specified orchestrator. |
* this API will be hosted on a port different than component 1.
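The component-2 API surface above could be routed as in the sketch below, using only the Python standard library. The in-memory sample data, the port, and the `route` helper are illustrative assumptions; the paths and field names follow the spec tables.

```python
# Minimal sketch of the component-2 endpoints: /orchestratorStats and
# /priceHistory/:orchestratorAddress, served as JSON.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative in-memory stand-ins for the component-2 database.
ORCHESTRATORS = [{"address": "0xabc", "pricePerPixel": 1.01, "active": True}]
PRICE_HISTORY = {"0xabc": [{"time": 1586000000, "pricePerPixel": 1.01}]}

def route(path):
    """Map a request path to (status, body) per the spec tables above."""
    if path.startswith("/priceHistory/"):
        address = path.split("/priceHistory/", 1)[1]
        return 200, PRICE_HISTORY.get(address, [])
    if path.startswith("/orchestratorStats"):
        return 200, ORCHESTRATORS  # last-updated orchestrator rows
    return 404, {"error": "not found"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = route(self.path)
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

if __name__ == "__main__":
    # Assumed port, distinct from component 1 as the spec requires.
    HTTPServer(("", 9000), Handler).serve_forever()
```

Keeping routing in a pure function makes the path logic testable without starting a server.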
The database should update active orchestrators once every hour https://github.com/livepeer/go-livepeer/blob/ce819a15a616277a7081b5cb91339c4121a22538/discovery/db_discovery.go#L23
Is displaying USD cost or some sort of calculator for transcoding price in scope for this project, or will it be based on the per-pixel pricing?
Quick idea:
- user can select input framerate & resolution & output renditions
- user can select their max price per pixel
- we show the $ price per hour based on the max price/pixel and the current ETH price
Additions: get the median price and lowest price for all active O's that fit the max price; show best-case, median and worst-case scenarios for transcoding cost
The database should update active orchestrators once every hour https://github.com/livepeer/go-livepeer/blob/ce819a15a616277a7081b5cb91339c4121a22538/discovery/db_discovery.go#L23
@kyriediculous, thanks for pointing it out. I checked this again on a local node that I set up. The update calls seem to be erratic in nature. I reached this conclusion by running the node from scratch (removing the ".lpdata" folder) 2 times, and recording the time of the first orchestrator-data update in the local DB (which differs from the DB creation time only by seconds) and the time of the second orchestrator-data update. The difference between the updates came to roughly 40 and 56 minutes for the respective tries (screenshots added below). Hence, having an endpoint to access this data directly from the network would be better than relying on the current endpoint, which depends on this DB. Let me know your thoughts on this.
Is displaying USD cost or some sort of calculator for transcoding price in scope for this project, or will it be based on the per-pixel pricing?
Quick idea:
- user can select input framerate & resolution & output renditions
- user can select their max price per pixel
- we show the $ price per hour based on the max price/pixel and the current ETH price
Additions: get the median price and lowest price for all active O's that fit the max price; show best-case, median and worst-case scenarios for transcoding cost
Such ideas are very much in the scope of this project, but not in the scope of the latest proposal. The ideas mentioned fall under the category of analytics, and hence were not included in the revised proposal, which aims for something closer to an MVP (polling for prices and exposing them via endpoints on a bare-bones UI).
The idea of building a calculator could be realized as a GUI implementation of something along the lines of the calc.py mentioned in this section of the documentation (https://livepeer.readthedocs.io/en/latest/transcoding.html#configuring-payment-parameters).
The idea of displaying USD cost would also fall under analytics, as we are talking about a feature that stays updated with currency exchange rates and shows results accordingly.
Based on Adam and Doug's feedback, we will be building this tool iteratively in multiple phases. It goes without saying that the community should feel free to suggest any feature they would like to see in the pricing tool, and we would be happy to build it out.
The difference between the updates came to roughly 40 and 56 minutes for the respective tries
@Saumay-Agrawal Pricing doesn't typically change very often, and the granularity would still be limited by the polling interval. Is there a benefit to having a view of the pricing with granularity less than an hour? Especially for an analytics dashboard that's focused on long-term history?
Indeed, pricing changes require manual intervention by the node operator. In a world where orchestrators are educated about pricing, it's very likely this would change frequently, as @j0sh mentions.
Also keep in mind that in order to even poll data from orchestrators directly, the requests need to be signed (provide an ETH address and a signature over the ETH address with the request). The broadcaster node already does this and aggregates the data. Using it would save a bunch of work, in my opinion, especially for an MVP.
As indicated by @j0sh, it will depend entirely on the trends exhibited by the pricing metric.
Since using the current endpoint means synchronising between the current local DB and the new DB, the usability of the current endpoint becomes a function of the granularity of data needed. Based on what @kyriediculous is saying, it is safe to assume that pricing trends will play out on a macro level, so the current endpoint and its underlying hourly polling mechanism should suffice for polling the pricing metrics in this case.
Also, since there is a provision to change the granularity of captured data via the value of cacheRefreshInterval, we can always figure out a way, if needed, to merge the implementations of the current local DB and the pricing-history DB. Not to forget, this could happen once the community is able to use a basic MVP of the pricing tool.
The process itself will generate feedback, fueling further iterations of the product. As far as the scope of the project and the scope of the MVP proposal are concerned, I would be happy to iterate on any other suggestions or feedback from your side and get things moving forward. :)
Hey @Saumay-Agrawal - thank you for all the great work you’ve already put into this proposal. We’re pleased to grant you 3,125 LPT, with 1041.66 to be disbursed upon the completion of each deliverable.
Deliverable 1 (1041.66 LPT) — Modification of the go-livepeer codebase to expose orchestrator metadata via a web server endpoint.
Deliverable 2 (1041.66 LPT) — A separate backend API service that will derive analytics from the web endpoint of component 1.
Deliverable 3 (1041.66 LPT) — A bare-bones UI to showcase the use of the API endpoints of component 2.
We’d like to grant an additional 875 LPT to be streamed using Sablier over the course of six months following completion of the deliverables to ensure the hosted API is maintained. Maintenance would include bug fixes and ensuring uptime. The committee would reserve the right to cancel the maintenance stream if critical bug fixes or downtime are not addressed within one week of being reported.
In addition, we would love to see the tool become a huge success in the Livepeer ecosystem, so we have put aside 1,000 LPT as an additional incentive for your team. Our current thinking on measuring success is a survey after the tool has been built and launched, with the target that 50% of broadcasters and orchestrators who answer the survey use this tool. But we are happy to work with you on the right metric to help measure our success with this project.
Please let us know if you have any questions. We’re excited to see this get built!
For those following this proposal, I just wanted to share an update. Per email correspondence with Saumay, we decided it makes sense to pare the milestones down to two. The first milestone (the pricing tool API) will be shared back with the community by April 7th, with the second milestone (a pricing tool user interface) shortly thereafter. The grant program will send BUIDL labs $1920 in LPT for each delivered milestone, an additional $1000 in LPT based on the success metric described above, and an additional $600 in LPT for maintenance to be streamed over Sablier. The amount of LPT will be based on a 30-day price average at the time of transfer.
Thanks Saumay. Looking forward to the API!
Hey @adamsoffer, the pricing tool API is ready!
The endpoints of the hosted demo can be accessed at:
Here is the repo if you or anyone else is interested in digging into the code.
Also, we are running this API on top of a broadcaster node. However, the JSON payload sent by the node has Price Per Pixel set to “0” for every orchestrator. The results remained the same even over a 2-day testing period. I followed the instructions given here for setting up a broadcaster node on mainnet. I suspect the issue lies somewhere in the broadcaster node configuration. Can you or someone else from the community help me identify the root cause of this issue?
Good start. I took a look at the API output, but as you mentioned, the price per pixel is showing zero for all the nodes. I believe the purpose of this tool is to accurately (as best as possible) report the prices quoted by each node recently, so it would seem that's a big missing feature ;) How are you going about trying to identify the price of each node?
I understand your concerns, @dob, and it’s a big missing feature indeed. The API merely reprocesses the data that the broadcaster node exposes from its own local DB. I was able to find a link to this Discord conversation showing that the broadcaster node does store the Price Per Pixel values for every orchestrator node.
Hence, I realize I have probably missed something in the broadcaster node setup. I have followed the steps given here for setting up a broadcaster node. I’m hoping someone from the community can help me resolve this issue so I can move forward and #buidl the next component of the pricing tool, that is, the UI. :)
Looking good :) Let's try and figure out the issue with your broadcaster node setup. @kyriediculous any idea why the Price Per Pixel would be returning “0” in the JSON payload?
Looks like an underlying problem with how the API handles the data.
The data returned for pricePerPixel is the stringified version of a *big.Rat in Go, so it is a fraction. It only gets converted to a float string (what is shown in the CLI) after we get the response back (we use eth/types.Transcoder and marshal that type to JSON).
This addition was only meant as an MVP to show an indication of pricing in the CLI, but since a rework of the CLI is still far out, this will have to do for now.
For handling this I suggest splitting the string on "/", then doing the division and truncating.
{
"PricePerPixel" : "211173/1000",
"Active" : true,
"FeeShare" : 29900,
"RewardCut" : 65000,
"DeactivationRound" : "115792089237316195423570985008687907853269984665640564039457584007913129639935",
"Address" : "0xfb9849b0b53f66b747bfa47396964a3fa22400a0",
"Status" : "Registered",
"ServiceURI" : "https://18.216.204.22:8935",
"ActivationRound" : 1611,
"DelegatedStake" : "461918071316252511997407",
"LastRewardRound" : 1707
},
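The split-and-divide suggestion above maps directly onto Python's `fractions` module, which parses the same "numerator/denominator" form the stringified `*big.Rat` uses. The function name and rounding precision below are illustrative choices, not part of the API.

```python
# Convert the node's stringified rational price into a plain float
# before serving it from the API.
from fractions import Fraction

def ppp_to_float(price: str, ndigits: int = 9) -> float:
    """Convert a stringified rational like '211173/1000' to a float.

    Plain integers such as '0' parse the same way, so unavailable
    orchestrators need no special-casing at this step.
    """
    return round(float(Fraction(price)), ndigits)
```

For example, `ppp_to_float("211173/1000")` yields 211.173, matching the PricePerPixel value in the payload above.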
I think I have found the issue here.
@kyriediculous I backpropagated from the difference between your JSON payload and mine from the same endpoint, i.e. localhost:7935/registeredOrchestrators. Yours contains PricePerPixel values as fraction strings ("211173/1000") and mine contains "0"s, and so does the local DB of the broadcaster node. Hence the issue should be around fetching the PricePerPixel values from mainnet.
On digging through the logs, I found errors like this:
Could not get orchestrator err=rpc error: code = Unknown desc = insufficient sender reserve
Are PricePerPixel values being left out because of this when the broadcaster polls for the data?
If this is the issue, what can be done to resolve it?
The broadcaster's way of fetching off-chain information to populate the DB depends on the discovery endpoints also used for transcoding. That means that if the request errors, e.g. due to not having enough reserve so no ticket params can be generated for you, no data will be inserted into the DB. tl;dr you need broadcasting funds.
I'm very reluctant to expose other gRPC methods purely for third party services at this point in time, if ever.
Perhaps if we ever rework the CLI server to work with auth we can have O's choose to expose certain public endpoints that third party services can use. This is currently not something for the very near future.
I know it's not optimal to deposit like 1 ETH of reserve for a node that's just being used as an endpoint, but yeah...
Got it. Is 1 ETH the minimum possible amount qualifying for "enough reserve" or is it a safe estimate?
With 1 ETH and 100 O's (assuming the list is full), this would give a reserve allocation of 0.01 ETH per orchestrator. This value should be greater than whatever the O sets as ticketEV, which defaults to 0.000001 ETH (1000 gwei). So perhaps 1 ETH is a bit much, but it's definitely a safe estimate; something like 0.1 ETH could do for most O's for the time being.
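The arithmetic above can be checked in a few lines. Amounts are in wei, and the function names are illustrative; the numbers are the ones from this discussion, not protocol constants.

```python
# Per-orchestrator reserve allocation must exceed the orchestrator's
# ticketEV for ticket params to be generated.
GWEI = 10**9
ETH = 10**18

def reserve_per_orchestrator(total_reserve_wei: int, num_orchs: int) -> int:
    """Total reserve split evenly across the active orchestrator set."""
    return total_reserve_wei // num_orchs

def covers_ticket_ev(total_reserve_wei: int, num_orchs: int,
                     ticket_ev_wei: int) -> bool:
    """True if the per-orchestrator allocation exceeds the ticketEV."""
    return reserve_per_orchestrator(total_reserve_wei, num_orchs) > ticket_ev_wei

# 1 ETH across 100 O's -> 0.01 ETH each, well above the 1000 gwei default.
```

Even 0.1 ETH across 100 O's leaves 0.001 ETH per orchestrator, still three orders of magnitude above the default ticketEV.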
Off the top of my head, I don't think you'll need a deposit.
I ran a broadcaster node on the Rinkeby testnet with 3 ETH in deposit and 3 ETH in reserve. I was able to get a Price per Pixel value for 1 orchestrator out of 34. I'm sharing a sample of the JSON payload for 3 orchestrators below:
{
"Address": "0xfebf35b94e16e018ab5ca9aa1f66aac1a1ab29e0",
"ServiceURI": "https://52.15.215.146:8935",
"LastRewardRound": 47991,
"RewardCut": 90000,
"FeeShare": 50000,
"DelegatedStake": 1.880877479264269e+25,
"ActivationRound": 44523,
"DeactivationRound": 1.157920892373162e+77,
"Active": true,
"Status": "Registered",
"PricePerPixel": "0"
},
{
"Address": "0x916327a01d6469fd24267c180ec38368a69f1e81",
"ServiceURI": "https://18.223.238.137:8935",
"LastRewardRound": 52374,
"RewardCut": 100000,
"FeeShare": 50000,
"DelegatedStake": 1.3750037498555517e+25,
"ActivationRound": 44493,
"DeactivationRound": 1.157920892373162e+77,
"Active": true,
"Status": "Registered",
"PricePerPixel": "101/100"
},
{
"Address": "0x02b3b4790934dbfb7027723bd1e9d2c2aa8a7ea6",
"ServiceURI": "https://129.204.210.145:8935",
"LastRewardRound": 48191,
"RewardCut": 500000,
"FeeShare": 600000,
"DelegatedStake": 163468568478652900000,
"ActivationRound": 44891,
"DeactivationRound": 1.157920892373162e+77,
"Active": true,
"Status": "Registered",
"PricePerPixel": "0"
}
I'm listing my concerns and queries below:
It seems that some amount needs to be in deposit/reserve for the broadcaster node to be able to fetch the price-per-pixel values from the network.
Also, I can see a change in the logs as well. The logs now show mixed results for fetching data from the orchestrators. I'm sharing the logs (http://notepad.pw/broadcasterRinkebyLogs) for your reference.
Does a repeat of the error Could not get orchestrator orch=https://75.142.220.31:8935 err=rpc error: code = Unknown desc = insufficient sender reserve mean that fetching PPP values is somehow dependent on the amount of ETH deposited/reserved? If this is true, how can one identify the minimum amount needed in deposit/reserve to be able to view PPP values from all the orchestrators?
What does the error Did not connect to orch=https://109.108.80.164:8935 err=context deadline exceeded mean? And how can it be resolved?
@kyriediculous, since you have more awareness of the inner workings of the system here, can you help me with the minimum steps for setting up a broadcaster node that fulfills the purpose of fetching PPP values from all the orchestrators?
The resulting solution could be documented (alongside the pricing tool documentation, for instance). I can see it being of immense value for new adopters of the network as well.
- Is there some way to bypass this? This, as you have already pointed out, doesn't seem optimal for a node hosted only for viewing data (like in our case).
No way to bypass this, and I don't think it should be bypassed. If you run a broadcaster node for your own purposes or as a service, and run a pricing API exposed to the outside world on top of that, perfect! But we shouldn't add support for running nodes that are only meant to burden orchestrators with requests for data.
Does a repeat of the error Could not get orchestrator orch=https://75.142.220.31:8935 err=rpc error: code = Unknown desc = insufficient sender reserve mean that fetching PPP values is somehow dependent on the amount of ETH deposited/reserved? If this is true, how can one identify the minimum amount needed in deposit/reserve to be able to view PPP values from all the orchestrators?
Please see my comment above re: reserve / numOrchs > ticketEV.
What does the error Did not connect to orch=https://109.108.80.164:8935 err=context deadline exceeded mean? And how can it be resolved?
It can't; the node is simply offline.
As with any error, the data is not available, and thus any orchestrator with pricePerPixel "0" should be excluded from the API, as this indicates the data couldn't be fetched.
You are using Rinkeby; most orchestrators on Rinkeby are not online, so they will not return pricing information.
Thanks for clarifying things, Nico!
I have reconfigured the node and deployed the API again. The PPP values are now visible @ http://35.223.32.189:9000/orchestratorStats.
As with any error, the data is not available, and thus any orchestrator with pricePerPixel "0" should be excluded from the API, as this indicates the data couldn't be fetched.
What would be your opinion on keeping such orchestrators in the API and marking them as "unreachable" in the UI? This could be achieved by color-coding such orchestrators differently, giving an overall status of all the orchestrators within the tool itself.
Also, I'm curious how negative PPP values are possible. And is there a need to handle them differently from "0" PPP values?
Also, I'm curious how negative PPP values are possible. And is there a need to handle them differently from "0" PPP values?
I know this is the case. There's an overflow issue there which is being fixed.
What would be your opinion on keeping such orchestrators in the API and marking them as "unreachable" in the UI? This could be achieved by color-coding such orchestrators differently, giving an overall status of all the orchestrators within the tool itself.
I think the answer depends on your UI requirements. Generically speaking, I would conclude that an orchestrator that (1) isn't responsive will not be usable by anyone else either, so showing pricing information for those might lead to confusion about what is available on the network; or (2) responds with an 'insufficient reserve' error: if the reserve for running your node is sensible, then the O might have set a ticketEV that is just too high for most other users as well.
Happy to hear it's working by the way.
Based on the insights from this discussion, I have updated the /orchestratorStats endpoint to take an excludeUnavailable query parameter, which defaults to false.
Results can be viewed at:
Thanks again Nico! :)
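The excludeUnavailable behaviour could look like the sketch below. The field names follow the API's JSON payload shown earlier; the function itself is a hypothetical illustration of the rule Nico described (a "0" price means the data couldn't be fetched).

```python
# Filter the orchestrator list when the excludeUnavailable flag is set:
# a PricePerPixel of "0" indicates the price couldn't be fetched.
def filter_orchestrators(orchestrators, exclude_unavailable=False):
    """Return all rows, or only those with fetched prices if the flag is set."""
    if not exclude_unavailable:
        return list(orchestrators)
    return [o for o in orchestrators if o.get("PricePerPixel") != "0"]
```

Keeping the unfiltered list available still allows a UI to color-code unreachable orchestrators, as suggested above.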
Looking good. I saw some discussion above about the PPP being displayed as a fraction (and sometimes with a negative numerator?). Is there a transformation that can be performed before returning the result so it is given as a wei/pixel float or int value rather than a fraction?
@dob the negative fractions are due to the fact that our protobuf messages declare these values as int64, but the actual orchestrator price differs from the base price because of the tx cost overhead, which adds a percentage to the base price. This can lead to fractions where we don't find a common denominator, and we end up with very large numerators and/or very large denominators, which causes an integer overflow when we use int64 for the protobuf messages. I discussed this with Yondon today and it is being fixed.
Displaying a float should be fairly simple and is described above; I agree that the end value produced by the API should be a float/float string.
For handling this I suggest splitting the string on "/", then doing the division and truncating.
Sure! I'll make these updates to the API.
Please let me know if there is any other feedback on the API. I'll incorporate that as well.
Hey @ community,
I'm glad to get back with updates on the Pricing Tool:
Hey @Saumay-Agrawal, the API endpoints look great! Fantastic work. I've spoken to Nico and the negative price issue should be resolved in an upcoming go-livepeer release.
Could you share an ETH address to send the LPT to for milestone 1?
Re: the UI...looking solid! One thing I noticed that's probably already on your radar is that there are no dates on the x-axis on the chart.
Hey @adamsoffer,
the API endpoints look great! Fantastic work. I've spoken to Nico and the negative price issue should be resolved in an upcoming go-livepeer release.
Great! I'm glad that you like our work. :)
Could you share an ETH address to send the LPT to for milestone 1?
The ETH address would be 0xacE490e46D53Bb84b339624C7Eba59A85909675e.
Re: the UI...looking solid! One thing I noticed that's probably already on your radar is that there are no dates on the x-axis on the chart.
Thanks for pointing this out. The problem seems to occur on Brave, and we are working on a resolution. Let me know if you are using a different browser.
Thanks @Saumay-Agrawal. The Grant Program orchestrator has submitted an unbond transaction and at the end of the unbonding period (~7 days) it will transfer 3093.5788903 LPT for milestone 1 based on this 30-day price average (0.6206403871).
Nice! As soon as all orchestrators upgrade to the upcoming 0.5.6 release you should see the negative numbers fade away.
Hey @ community,
I'm glad to inform you that the final version of the UI is up and running at http://35.223.32.189:3000/ as per the second milestone's scope. Viewing on Brave requires some additional browser settings; instructions can be found in the UI itself. Please let me know if there is any feedback on the UI.
I'm happy to see the new release of go-livepeer as well. :)
@Saumay-Agrawal this looks really great 😀 Want to try updating to the latest release of go-livepeer and see if that fixes the negative price values?
By the way I'm not seeing any rendering issues on Brave, so you could perhaps remove that warning.
Hey @adamsoffer!
Want to try updating to the latest release of go-livepeer and see if that fixes the negative price values?
I have updated to the latest release of go-livepeer (v0.5.7). However, some orchestrators still show negative price values. Could it be that not all orchestrators have updated to the latest version yet? If so, I think this would be the right moment to reach out to the orchestrators via a Discord channel. :)
> By the way I'm not seeing any rendering issues on Brave, so you could perhaps remove that warning.
In our testing, we found that some UI elements specific to the price history graphs (tooltips, axes, etc.) behaved unpredictably on Brave unless the recommended configuration is applied, and the usability of the price history graphs depends on them. Hence I would recommend keeping the warning popup, in case anyone faces a similar issue.
Also, we have received the LPT for the first milestone. Thanks!
@adamsoffer @Saumay-Agrawal, just to clarify: getting rid of negative prices requires node updates from the O's, not the B node.
> As soon as all orchestrators upgrade to the upcoming 0.5.6 release you should see the negative numbers fade away.
@Saumay-Agrawal Looks like milestone 2 is complete! Thanks for the awesome work; the committee is thrilled with the result. We'll reach out to the O's who need to update to the latest version of go-livepeer to get rid of the remaining negative prices.
We unbonded LPT from the Grants Orchestrator for milestone 2 based on the 30-day price average between 3.24.20 and 4.23.20.
The committee will write up and conduct the survey within the next few weeks, or once we feel enough O's and B's have had a chance to use the tool. We'll share the survey questions and results here.
We're excited to share this tool with the rest of the community. Would you be open to joining the next community call to talk about it?
> @Saumay-Agrawal Looks like milestone 2 is complete! Thanks for the awesome work; the committee is thrilled with the result. We'll reach out to the O's who need to update to the latest version of go-livepeer to get rid of the remaining negative prices.
Awesome! I'm glad you like our work. Our team as a whole is very open to feedback and constructive criticism on how we can make our engagements more fruitful, so feel free to share any feedback that might have crossed your mind. :)
> We'll reach out to the O's who need to update to the latest version of go-livepeer to get rid of the remaining negative prices.
> We unbonded LPT from the Grants Orchestrator for milestone 2 and maintenance based on the 30-day price average between 3.24.20 and 4.23.20.
> The committee will write up and conduct the survey within the next few weeks, or once we feel enough O's and B's have had a chance to use the tool. We'll share the survey questions and results here.
Got it. Looking forward to seeing the Livepeer ecosystem onboard onto the tool!
> We're excited to share this tool with the rest of the community. Would you be open to joining the next community call to talk about it?
Definitely. I would love to join!
Since you have a better vantage point on who joins the community calls, let me know if you have any tips for calibrating the presentation.
There are two spectrums I generally think about. Spectrum 1: crisp to elaborate. Where should the presentation fall? Spectrum 2: average Joe to highly technical. How in-depth should it be?
Knowing where the presentation should lie on both spectrums helps me better serve the listeners. :)
> Awesome! I'm glad you like our work. Our team as a whole is very open to feedback and constructive criticism on how we can make our engagements more fruitful, so feel free to share any feedback that might have crossed your mind. :)
The committee met on Friday and we discussed the idea of including a simple page with API documentation for other developers interested in building on top of it, since the focus of this grant was around the API. What do you think?
The other thing we discussed is putting a heavier focus around pricing in the UI. Right now the table layout looks similar to the orchestrators view on the protocol explorer, and from a product perspective, we think it would be helpful to differentiate this app and frame it more as a pricing tool as opposed to a protocol explorer. Some ideas to achieve this:
Lastly, any plans to put the UI behind a domain?
> Since you have a better vantage point on who joins the community calls, let me know if you have any tips for calibrating the presentation.
The calls are generally pretty relaxed. Anywhere between 15 and 30 people usually attend, and they last an hour. We rotate between 3 types of calls, one per month (a topic call, a project update call, and an infra provider call), so each recurs quarterly. Attendees are usually a mix of tokenholders, orchestrators, contributors/developers, and crypto-enthusiasts. I think a quick overview covering the following would be great:
I think the next infra provider call (next month I believe) would be a good opportunity to share. We'll keep you posted!
Hi @adamsoffer,
We totally agree with keeping the tool focused on pricing, and we are working on your feedback. We will be making the recommended updates soon.
While looking into the proposed fees-earned column, I have been digging for the source of this data. It isn't fetched and stored in the broadcaster node's local DB, and the API hosted on The Graph returns null values.
However, I can see the data visualised on Scout's dashboard at https://scout.cool/livepeer/mainnet.
It would be a great help if you could point me to the source of this data.
> I think the next infra provider call (next month I believe) would be a good opportunity to share. We'll keep you posted!
Great! Looking forward to this call. :)
@Saumay-Agrawal great! That's the right query. A lot of registered O's simply haven't generated any fees yet. Try ordering by totalGeneratedFees in descending order and you'll see the ones that did at the top:

```graphql
{
  transcoders(orderBy: totalGeneratedFees, orderDirection: desc) {
    totalGeneratedFees
  }
}
```
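For anyone who wants to script against this, here is a minimal sketch of issuing the query above from Python and filtering the result. The subgraph URL is an assumption for illustration (the hosted-service path for the Livepeer subgraph may change), and the helper names are hypothetical:

```python
import json
import urllib.request

# Assumed public Livepeer subgraph endpoint on The Graph's hosted service;
# verify the current path before relying on it.
SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/livepeer/livepeer"

QUERY = """
{
  transcoders(orderBy: totalGeneratedFees, orderDirection: desc, first: 10) {
    totalGeneratedFees
  }
}
"""

def fetch_transcoders(url=SUBGRAPH_URL, query=QUERY):
    """POST the GraphQL query and return the decoded transcoder list."""
    payload = json.dumps({"query": query}).encode("utf-8")
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["data"]["transcoders"]

def nonzero_fee_earners(transcoders):
    """Drop registered O's that haven't generated any fees yet."""
    return [t for t in transcoders if float(t["totalGeneratedFees"]) > 0]
```

With descending ordering, `nonzero_fee_earners(fetch_transcoders())` would be a prefix of the returned list, which is why the fee generators show up at the top.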
Hey @Saumay-Agrawal - just want to give you a heads up the next quarterly transcoder call is June 11. Think we can get these updates in before then?
Hi @adamsoffer, apologies for the radio silence from @Saumay-Agrawal and BUIDL Labs.
I am stepping in for him, since he is currently out of office due to an emergency.
He will be back this week. Irrespective, I promise to get all issues resolved from our end, tentatively by June 5 or June 7, giving you enough time before the call to review last-minute changes.
Okay thanks for the update @prastut
Hey @prastut, sorry to hear that. I hope everything is well.
I submitted a small PR to fix the pricing information, and also left some feedback in that PR that might be useful for improving the MVP, now that I've had the opportunity to look through the code.
I think the full E2E MVP product is a higher priority right now, but hopefully it helps you along!
Hey @adamsoffer and @kyriediculous,
An updated version of the pricing tool is up and running at http://livepeer-pricingtool.surge.sh/.
The updates include:
Please let me know if there's any feedback. :)
Overview
The Livepeer network contains various nodes (broadcasters, delegators, orchestrators/transcoders, and end-users), which work together to constitute a decentralized streaming service. The orchestrators/transcoders are the heart of this network: they handle the video transcoding operations and help broadcasters reach users across various platforms. However, there is currently no way to get an overview of the prices and fees being charged for transcoding on the network. We aim to fill this void by developing a Price Monitoring Tool for the network. The tool will be built around, but not limited to, metrics like “price per unit”, “pixels per unit”, and “number of pixels encoded”.
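To make those metrics concrete, here is a small sketch (with hypothetical numbers; real values are advertised by each orchestrator) of how "price per unit" and "pixels per unit" combine into a per-stream transcoding cost:

```python
WEI_PER_ETH = 10**18

def price_per_pixel(price_per_unit_wei, pixels_per_unit):
    """Effective price per pixel in wei, combining the two advertised metrics."""
    return price_per_unit_wei / pixels_per_unit

def stream_cost_eth(width, height, fps, seconds,
                    price_per_unit_wei, pixels_per_unit):
    """Rough cost in ETH of transcoding one rendition of a stream."""
    pixels_encoded = width * height * fps * seconds
    return (pixels_encoded
            * price_per_pixel(price_per_unit_wei, pixels_per_unit)
            / WEI_PER_ETH)

# Hypothetical example: one hour of 720p30 at 1200 wei per 1000 pixels.
cost = stream_cost_eth(1280, 720, 30, 3600, 1200, 1000)
```

This is the arithmetic the tool surfaces: broadcasters can compare orchestrators by effective price per pixel rather than by raw, incomparable unit prices.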
Deliverables
Roadmap
Phase 1 - Research Phase
This phase was completed before the proposal was submitted.
Phase 2 - Data Aggregation (2 Weeks)
This phase solves the problem of collecting the raw data. The necessary steps include:
End result - a one-stop-shop database for all the data related to the Fees and Pricing involved in the transcoding process.
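As a rough illustration of what such a one-stop-shop database could hold (the table and column names below are hypothetical, not the tool's actual schema), each poll of an orchestrator's advertised price could be stored as one row:

```python
import sqlite3

# Illustrative schema: one row per poll of an orchestrator's advertised price.
SCHEMA = """
CREATE TABLE IF NOT EXISTS price_points (
    orchestrator    TEXT    NOT NULL,  -- orchestrator's on-chain address
    price_per_unit  INTEGER NOT NULL,  -- advertised price in wei
    pixels_per_unit INTEGER NOT NULL,  -- pixels covered by one unit
    fetched_at      TEXT    NOT NULL   -- ISO-8601 timestamp of the poll
);
"""

def record_price(conn, orchestrator, price_per_unit, pixels_per_unit, fetched_at):
    """Append one price observation for an orchestrator."""
    conn.execute(
        "INSERT INTO price_points VALUES (?, ?, ?, ?)",
        (orchestrator, price_per_unit, pixels_per_unit, fetched_at),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
record_price(conn, "0xace490", 1200, 1000, "2020-04-23T00:00:00Z")
```

Keeping every observation (rather than only the latest price) is what later makes historical charts and rolling averages possible.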
Phase 3 - Data Analytics (2 Weeks)
In this phase, the foundation of the analytics behind the Pricing Tool will be laid down. The key activities include:
End result - An API for directly accessing various analytics related to Fees and Pricing involved in the transcoding process.
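As a sketch of the kind of aggregate such an API could serve (the function name and input shape are illustrative, not the tool's actual interface), a rolling average over stored price points might look like:

```python
def average_price_per_pixel(samples):
    """Mean of price_per_unit / pixels_per_unit over a window of samples,
    e.g. the points recorded for one orchestrator during the last 30 days.
    Returns None when there is no data for the window."""
    if not samples:
        return None
    return sum(price / pixels for price, pixels in samples) / len(samples)

# Two hypothetical polls: 1.2 wei/pixel and 0.8 wei/pixel average to 1.0.
avg = average_price_per_pixel([(1200, 1000), (800, 1000)])
```

An endpoint would then just select the relevant window from the database and return this number as JSON.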
Phase 4 - Data Visualisation (4 Weeks)
In this phase, the Pricing Tool's dashboard will be implemented. Key steps include:
End result - A pricing tool that enhances the relationship between the broadcasters and the orchestrators.
Phase 5 - Q/A and Bug Fixes (2 Weeks)
Maintenance Plans
Upgrade Plans
Total Budget Requested
Team
Contact Info
Email: saumay@thevantageproject.com
About the Team
Our team comprises people with backgrounds in theory-based visual design and analytical methods. We like to discover and validate insights from data, then translate them into systems, processes, and frameworks that help create mindful human-information interactions.
The advent of high-technology information, imaging, networking, mobile devices, and social media systems has fostered a modern renaissance in visualization. Just as the great artists of the European Renaissance were also designers, inventors, scientists, and architects, we think modern visualizers have an essential role to play in decoding the increasingly complex world and making it accessible to all humans.
Curiosity led us down the blockchain ecosystem rabbit hole. We wanted to analyze the organic activity happening on these public blockchains, but the reality is that the format of these public datasets makes it very difficult to analyze, understand, and derive insights from them.
Hence, we want to build tools that can help in solving this problem and give users means to make informed decisions without relying on any third party. Visualizations are one part of this toolkit.
Team Website: https://www.thevantageproject.com/buidl/
Team Members
ProofOfWork
Similar projects Github repo links: