hackfisher closed this issue 4 years ago.
It doesn't matter which programming language this proxy service is implemented in. However, the offchain worker pallets and a sample implementation should be completed within the scope of this issue, so they can be used for integration tests or production usage.
The offchain worker will fetch data from a non-existent domain, eth-resource, and the communication protocol will follow the Ethereum JSON-RPC standard:
https://github.com/ethereum/wiki/wiki/JSON-RPC#eth_getblockbyhash
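For reference, an `eth_getBlockByHash` request body looks like the following sketch; the block hash is a placeholder, and the endpoint would be whatever the proxy service exposes (this is illustrative, not the final spec):

```python
import json

# Hypothetical JSON-RPC request body for eth_getBlockByHash.
# The method name and params follow the Ethereum JSON-RPC spec;
# the block hash below is a placeholder.
request = {
    "jsonrpc": "2.0",
    "method": "eth_getBlockByHash",
    "params": [
        "0xdc0818cf78f21a8e70579cb46a43643f78291264dda342ae31049421c82d21ae",
        False,  # False = return only transaction hashes, not full tx objects
    ],
    "id": 1,
}
body = json.dumps(request)
print(body)
```

A custom proxy can extend the returned fields beyond this shape while keeping the standard fields intact.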
It is more than a proxy (such as nginx), because there may be additional requirements such as data processing and proof generation, so let's treat it as an offchain-worker companion service.
Yup, the returned fields can go beyond the JSON-RPC standard, so we can customize them and fetch more data as needed, while staying compatible with the standard.
I think we don't need a proxy; just running an Ethereum light node, reading its database, and proving the headers will solve all the problems.
For example, the craber (give the service a name) has one simple API for the first version:
GET /block/{hash or height}:

```json
{
  "header": {},
  "proof": {}
}
```
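To make the shape concrete, here is a minimal Python mock of that endpoint; the empty `header`/`proof` bodies, the dynamically chosen port, and the lack of error handling are all assumptions for illustration, not part of any spec:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal mock of the proposed `GET /block/{hash or height}` API.
# A real service would look up the block and compute the proof;
# here both fields are empty placeholders.
class BlockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/block/"):
            body = json.dumps({"header": {}, "proof": {}}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to port 0 so the OS picks a free port for the demo.
server = HTTPServer(("127.0.0.1", 0), BlockHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Query the mock by height (a hash would work the same way).
with urllib.request.urlopen(f"http://127.0.0.1:{port}/block/12345") as resp:
    data = json.loads(resp.read())
print(sorted(data.keys()))
server.shutdown()
```

The single route keeps the first version trivial to implement and to consume from the offchain worker.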
Meeting minutes with @clearloop:
The implementation will be done in two stages, using a non-existent domain eth-resource.
I am preparing the doc, which assumes that the service is already running on port 8000.
This document should be part of the README, and it depends on issue #6. Because I am not familiar with this project, issue #6 may be completed by others, and I am willing to review it. That way, people first learn how to set up and run the shadow service on any port they like.
Then I will add a new section to the README about the URL connection and the proxy settings, and help validators set all of this up, so that everything is clear and nothing is misleading.
Currently, the URL and payload method are hardcoded in the substrate offchain pallet.
It is not easy for validators to use a customized data source, nor for us to develop new sources for them. We should decouple this design by using a proxy (e.g., a localhost service).
Then validators can easily configure the data source they want, and community developers can fork and develop new proxies for the offchain worker.
The proxies could provide two kinds of services:

1. Receive a request from the substrate offchain worker and redirect it to the target data source; after fetching the data, return it to the offchain worker's call.
2. After fetching the data, directly insert it into the offchain database via an RPC call, so that the offchain worker can read the data from the substrate offchain DB. https://github.com/darwinia-network/darwinia-common/issues/53
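For the second mode, the proxy could push fetched data into the node's offchain storage with the `offchain_localStorageSet` RPC that Substrate nodes expose. The storage kind, key, and payload below are placeholders I chose for illustration, not fixed by this issue:

```python
import json

# Sketch of mode 2: inserting fetched data into the node's offchain DB.
# `offchain_localStorageSet` is a Substrate RPC method; the key
# ("eth_block", hex-encoded) and the value are assumed placeholders.
rpc_call = {
    "jsonrpc": "2.0",
    "method": "offchain_localStorageSet",
    "params": [
        "PERSISTENT",            # storage kind: PERSISTENT or LOCAL
        "0x6574685f626c6f636b",  # hex-encoded key: "eth_block" (assumed name)
        "0x00",                  # hex-encoded value: the fetched header/proof
    ],
    "id": 1,
}
print(json.dumps(rpc_call))
```

The offchain worker can then read the value back from local storage under the same key, without making any HTTP request itself.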
As for the offchain worker design, it should provide stable APIs/specs for the requests and payloads sent to the proxy, which should not change rapidly; this way, different proxy implementations can be built on top of them.