The main issue I see with this is that different providers can have different eth_call gas limits. For example, Infura has a pretty low one of roughly 20 million. As long as we can stay under that, though, we should be fine for most users.
Good point about the gas cap on eth_call. Updated the OP to note that we should benchmark the gas cost of the batch functions exposed by the viewer contract.
One other point to note is that the bottleneck now becomes EVM execution speed instead of RPC calls.
I've just written a Viewer contract which we could use for getting the stakes and the transcoder pool. I still have concerns about the gas limits that several RPC providers place on calls, since we'd be aggregating data from several different functions for 100 orchestrators.
Now that the ServiceURI is included in the subgraph Transcoder entity, we could also use the subgraph to query the transcoder pool. The downside here is that the subgraph service itself hasn't been the most reliable and is sometimes inaccessible, which could affect a node's ability to start up. WDYT?
For local setups we can still use regular RPC calls if no subgraph is defined on startup.
I quickly prototyped something and tested it manually with a mainnet broadcaster node. It's an extensible subgraph client, although it currently only implements a single function.
https://github.com/livepeer/go-livepeer/tree/nv/subgraph-transcoderpool
It's still missing unit tests.
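For reference, here's a minimal sketch of what a client like this might look like in Go. The endpoint URL, the query shape, and the field names (transcoders, id, serviceURI, the active filter) are assumptions based on the subgraph Transcoder entity mentioned above, not the actual prototype's code:

```go
package subgraph

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// graphqlReq is the JSON body for a GraphQL POST request.
type graphqlReq struct {
	Query string `json:"query"`
}

// Transcoder mirrors the fields we need from the subgraph's Transcoder
// entity. Field names are assumptions based on the entity mentioned above.
type Transcoder struct {
	ID         string `json:"id"`
	ServiceURI string `json:"serviceURI"`
}

type poolResp struct {
	Data struct {
		Transcoders []Transcoder `json:"transcoders"`
	} `json:"data"`
}

// Client is a minimal, extensible subgraph client.
type Client struct {
	endpoint string
	http     *http.Client
}

func NewClient(endpoint string) *Client {
	return &Client{endpoint: endpoint, http: &http.Client{}}
}

// TranscoderPool fetches the active transcoder pool in a single HTTP
// request instead of N eth_call RPC requests.
func (c *Client) TranscoderPool() ([]Transcoder, error) {
	q := graphqlReq{Query: `{ transcoders(where: {active: true}) { id serviceURI } }`}
	body, err := json.Marshal(q)
	if err != nil {
		return nil, err
	}
	resp, err := c.http.Post(c.endpoint, "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("subgraph returned status %d", resp.StatusCode)
	}
	var out poolResp
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Data.Transcoders, nil
}
```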
I do see reasons why we'd want a viewer contract instead, though, so I'm interested in what your opinion is here.
Some initial benchmarking results using the mainnet orchestrator pool. This data is for fetching the on-chain data for all transcoders in the TranscoderPool; it does not include separately querying each transcoder for its off-chain data.
- baseline (current release)
- subgraph
- Viewer contract
The subgraph seems to win out in speed as a "speed-up option".
- subgraph
- Viewer contract
This is slightly opinionated, but I'd say the subgraph integration also wins in this category.
- Subgraph
- Viewer contract: uses a single eth_call to get the transcoder pool. This is OK for Infura, but it's unclear how well this works with other providers or self-hosted nodes (I can't find any immediate info for geth). The current gas for the eth_call would be okay as it's barely under the block gas limit; however, if we were to increase the transcoder pool size, it's uncertain how this would affect usage with services other than Infura, or with self-hosted Ethereum nodes.
From the benchmarks as well as the "other considerations", the best way to achieve a direct speed-up is to use the subgraph, which fetches the transcoder pool roughly 100x faster.
The current prototype makes usage of the feature optional (enabled by providing the subgraph flag), and when the subgraph is unavailable we can still use good old RPC calls to start the node.
We can still use a viewer contract for other solutions (such as caching stakes each round) to reduce RPC calls; however, I think we can also use the subgraph's Pool entity to accomplish this.
Thus, unless you have other objections @yondonfu, I think it makes sense to continue with the subgraph integration: adding unit tests for the current functionality, and scoping the fetching of stakes upon round initialization into a separate issue that would also use the subgraph.
As mentioned during the planning meeting, the subgraph and a viewer contract aren't mutually exclusive features.
The workflow would be:
1. Fetch the TranscoderPool using the subgraph (if the subgraph flag is specified on node startup)
2. Fetch the TranscoderPool using individual RPC calls otherwise, or if the subgraph is unavailable
A viewer contract could easily fit in here as well (a rough sketch of this fallback chain follows below):
1. Fetch the TranscoderPool using the subgraph (if the subgraph flag is specified on node startup)
2. Fetch the TranscoderPool using the viewer contract if the subgraph is unavailable
3. Fetch the TranscoderPool using individual RPC calls as a final fallback
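A minimal sketch of that fallback chain, assuming hypothetical types (the source interface and Transcoder struct here are illustrative, not actual go-livepeer types):

```go
package pool

import "fmt"

// Transcoder is a placeholder for whatever on-chain data the node needs
// per pool member.
type Transcoder struct {
	Address    string
	ServiceURI string
}

// source fetches the transcoder pool by some means: subgraph, viewer
// contract, or individual RPC calls.
type source interface {
	TranscoderPool() ([]Transcoder, error)
}

// fetchPool tries each source in order, falling back to the next on error,
// mirroring the workflow above: subgraph -> viewer contract -> RPC calls.
func fetchPool(sources ...source) ([]Transcoder, error) {
	var lastErr error
	for _, s := range sources {
		pool, err := s.TranscoderPool()
		if err == nil {
			return pool, nil
		}
		lastErr = err
	}
	return nil, fmt.Errorf("all transcoder pool sources failed: %w", lastErr)
}
```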
For now though, if the subgraph works well as an optional feature to speed up the node's operations, I deem that to be sufficient. The subgraph flag makes it opt-in for users to rely on this hosted service.
Currently, all clients interacting with the BondingManager need to submit N RPC requests in order to fetch the current on-chain transcoder pool (example from go-livepeer), where N is the size of the on-chain transcoder pool. Furthermore, clients often need other on-chain data about a transcoder (e.g. active stake, total stake, service URI) in addition to the transcoder's address. At the moment, a client needs to send separate RPC requests to fetch this on-chain data. Reducing the # of RPC requests required in these situations would help clients that depend on rate-limited ETH RPC providers (e.g. Infura) and would also reduce the execution time required to fetch relevant on-chain data about the transcoder pool [1].
One way to reduce the # of RPC requests in these situations could be to deploy a "viewer" contract. This contract would read data from the BondingManager and could batch together function calls that would otherwise need to be executed on the BondingManager individually into a single function call. Clients would then interact with this viewer contract instead of directly interacting with the BondingManager, at least for the on-chain data that can be fetched via the viewer contract. To address the situations described above, the viewer contract could expose a function that loops through the transcoder pool and returns all relevant on-chain data for each of the pool addresses.
Here is an example of what the viewer contract might look like:
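A rough sketch along these lines, assuming hypothetical BondingManager/ServiceRegistry interface methods for iterating the pool and reading stakes (the actual signatures may differ); it also includes the address-list variant (getStakes) described in the next paragraph:

```solidity
pragma solidity ^0.5.11;
pragma experimental ABIEncoderV2;

// Assumed interfaces; the actual BondingManager/ServiceRegistry
// signatures may differ.
interface IBondingManager {
    function getTranscoderPoolSize() external view returns (uint256);
    function getFirstTranscoderInPool() external view returns (address);
    function getNextTranscoderInPool(address _transcoder) external view returns (address);
    function transcoderTotalStake(address _transcoder) external view returns (uint256);
}

interface IServiceRegistry {
    function getServiceURI(address _addr) external view returns (string memory);
}

contract Viewer {
    IBondingManager public bondingManager;
    IServiceRegistry public serviceRegistry;

    constructor(IBondingManager _bondingManager, IServiceRegistry _serviceRegistry) public {
        bondingManager = _bondingManager;
        serviceRegistry = _serviceRegistry;
    }

    // Batches N reads into a single eth_call: loops through the transcoder
    // pool and returns each member's address, total stake, and service URI.
    function getTranscoderPool()
        external
        view
        returns (address[] memory addrs, uint256[] memory stakes, string[] memory uris)
    {
        uint256 size = bondingManager.getTranscoderPoolSize();
        addrs = new address[](size);
        stakes = new uint256[](size);
        uris = new string[](size);

        address cur = bondingManager.getFirstTranscoderInPool();
        for (uint256 i = 0; i < size; i++) {
            addrs[i] = cur;
            stakes[i] = bondingManager.transcoderTotalStake(cur);
            uris[i] = serviceRegistry.getServiceURI(cur);
            cur = bondingManager.getNextTranscoderInPool(cur);
        }
    }

    // Address-list variant: returns the stake for an arbitrary set of
    // addresses, e.g. for refreshing stakes at the start of a round.
    function getStakes(address[] calldata _addrs) external view returns (uint256[] memory stakes) {
        stakes = new uint256[](_addrs.length);
        for (uint256 i = 0; i < _addrs.length; i++) {
            stakes[i] = bondingManager.transcoderTotalStake(_addrs[i]);
        }
    }
}
```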
An additional function that could be useful is one that accepts a list of addresses (instead of using the addresses in the transcoder pool) and returns the relevant on-chain data for each address. This function could be used by a client that needs to fetch the stake for multiple addresses at a regular interval (e.g. at the beginning of a round).
[1] Batching operations into a single function call will reduce the # of RPC requests, but it will increase the number of steps executed in the EVM. My guess is that the overhead from EVM step execution will be less than the execution time saved by not submitting multiple RPC requests, but we should validate this. We should also benchmark the gas cost of the functions exposed by the viewer contract to make sure that it is below the gas cap for eth_call imposed by certain RPC providers such as Infura.
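One way to benchmark this could be eth_estimateGas against the deployed viewer contract, which should approximate the gas the batched eth_call would consume; the endpoint, contract address, and calldata in this sketch are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Endpoint and contract address below are placeholders.
	client, err := ethclient.Dial("https://mainnet.infura.io/v3/<project-id>")
	if err != nil {
		log.Fatal(err)
	}
	viewerAddr := common.HexToAddress("0x0000000000000000000000000000000000000000")

	// ABI-encoded calldata for the batch function, e.g. produced by an
	// abigen binding; this 4-byte selector is a placeholder.
	data := common.FromHex("0xabcdef01")

	// eth_estimateGas approximates the gas the eth_call would consume,
	// which we can compare against provider caps and the block gas limit.
	gas, err := client.EstimateGas(context.Background(), ethereum.CallMsg{
		To:   &viewerAddr,
		Data: data,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("estimated gas for batched call: %d\n", gas)
}
```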