geckoslair opened this issue 6 years ago (status: Open)
It is a great idea, very much in the ipfs spirit.
The only problem is that IPFS is much slower than HTTP, so transferring the data over IPFS would just slow things down. That is not good (though perhaps we could send the data itself along with the response).
What is interesting is that such a thing could also help decentralize the paratii-db: there would be a central paratii-db service that seeds IPFS hashes of query results, but even if the service goes away, clients could still get the query results from peers. (To make it work like that we would have to flesh out more details, but it is intriguing.)
To see if I get the idea right: the DB would normally respond over HTTP, but would also upload the response payload to IPFS, publishing its hash and an expiry date for the query. Then, besides going directly to the DB over HTTP, nodes could look for peers that have cached the response for a particular query they want to repeat?
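The read path described above can be sketched roughly as follows. This is an illustrative sketch, not existing paratii code: `resolveQuery`, `fromIpfs`, and `fromHttp` are hypothetical names, and the IPFS/HTTP fetchers are stubbed out.

```typescript
// One entry of the hypothetical index mapping a query to its cached payload.
interface CachedResult {
  ipfsPayload: string; // IPFS hash of the cached query response
  ttl: number;         // timestamp after which the cached payload expires
}

// Hypothetical resolvers: one fetches a cached payload from IPFS peers
// (returning null on a miss), the other hits the central DB over HTTP.
type IpfsFetch = (hash: string) => string | null;
type HttpFetch = (query: string) => string;

function resolveQuery(
  query: string,
  index: Map<string, CachedResult>, // stand-in for the on-chain index
  now: number,
  fromIpfs: IpfsFetch,
  fromHttp: HttpFetch
): string {
  const entry = index.get(query);
  if (entry && entry.ttl > now) {
    // A cached result is registered and has not expired: try peers first.
    const cached = fromIpfs(entry.ipfsPayload);
    if (cached !== null) return cached;
  }
  // No valid cache entry (or no peer had it): fall back to the central DB.
  return fromHttp(query);
}
```

The point of the `ttl` check is that a stale entry degrades gracefully: the client simply falls back to HTTP, exactly as if no cache existed.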
This may be relevant here https://github.com/ipfs/notes/issues/161
Yes, this is the idea. I've discussed it with @jellegerbrandy, and it turns out we also need to know the query/key in order to fetch the right IPFS-cached payload. That means we need to register on a contract, called Index.sol or whatever, something like:
```js
{
  query: '{keyword: "cat"}',       // the query
  ipfsPayload: 'hash to the ipfs payload',
  ttl: 'a time that tells how long this payload is valid'
}
```
This will bring the DB to a more decentralized state.
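For the write path, the DB service would publish the payload and then register the record above. A minimal sketch, with illustrative names: `publish` stands in for an IPFS add returning a hash, and a plain `Map` stands in for the Index.sol contract.

```typescript
// The record registered on the hypothetical Index.sol contract.
interface IndexRecord {
  query: string;       // the query, used as the lookup key
  ipfsPayload: string; // IPFS hash of the published response payload
  ttl: number;         // timestamp until which this payload is valid
}

function registerResult(
  query: string,
  payload: string,
  ttlSeconds: number,
  now: number,
  publish: (payload: string) => string, // stub for an IPFS add -> hash
  index: Map<string, IndexRecord>       // stub for the Index.sol contract
): IndexRecord {
  const record: IndexRecord = {
    query,
    ipfsPayload: publish(payload),
    ttl: now + ttlSeconds,
  };
  // Key the record by the query so clients can look it up later.
  index.set(query, record);
  return record;
}
```

Keying the index by the query string means clients repeating the same query can find the cached payload even if the central service is gone, as discussed above.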
Something like: