Closed belikor closed 3 years ago
@lyoshenka this PR involves API changes, please review
> it works when the number of claims is relatively small; however, once the number of claims is large, more than 500 or so, the `Daemon.jsonrpc_file_read` method will time out

Is there a way to increase the timeout? I wonder if I can just pass the `--timeout` option all the way to the `jsonrpc_get` method. The idea is that if we pass a file with an arbitrary number of claims, say 5000, the method will process every single item.
I'd prefer not to add this feature. It can be accomplished with a few lines of scripting, and as you pointed out it doesn't work when there are many claims (at which point you fall back to scripting anyway).
As I said in https://github.com/lbryio/lbry-sdk/pull/3422#issuecomment-924200281, we should aim to keep the API simple.
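The "few lines of scripting" fallback could look something like the sketch below: call `lbrynet get` once per claim so that each download gets its own timeout instead of one long-running `jsonrpc_file_read` call. This is only a sketch; it assumes `get` accepts a claim ID directly (as proposed in #3411) and that the `--timeout` option applies per call, and the `download_claims` helper name is made up for illustration.

```python
import subprocess

def download_claims(claim_ids, timeout=60, dry_run=False):
    """Invoke `lbrynet get` once per claim ID.

    Each subprocess call is independent, so one slow claim cannot
    time out the whole batch. Assumes `get` can take a claim ID
    (see #3411) and honors a per-call `--timeout` option.
    """
    commands = []
    for claim_id in claim_ids:
        cmd = ["lbrynet", "get", claim_id, f"--timeout={timeout}"]
        commands.append(cmd)
        if not dry_run:
            # check=False: a single failed download should not abort the loop.
            subprocess.run(cmd, check=False)
    return commands
```

With `dry_run=True` the function only builds the commands, which is handy for testing the loop without a running daemon.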
This follows up on #3422.

The idea with #3422 is to produce a file with a list of claims. With this pull request we take that written file, parse it to get the claim IDs, and then download each of the streams. The file is a comma-separated values (CSV) file, although by default we use the semicolon `;` as the separator.

Basically, the idea is that we can share lists of claims with other users of the LBRY network, and they can import these lists into their own computers (through `lbrynet` or the LBRY Desktop application) so that they can download the same claims that we have, and thus help seed the same content that we are seeding.

This is a prototype implementation; it works when the number of claims is relatively small. However, once the number of claims is large, more than 500 or so, the `Daemon.jsonrpc_file_read` method will time out, so it won't finish processing the list. I'm not sure what can be done to make sure it processes a big list without timeouts.

The obvious solution is to not implement this in the SDK itself, but to parse the file and call `lbrynet get` on each of the claims. Then each call to `get` will be separate from the others, and each will have its own timeout.

Also, since the file is meant to contain the `'claim_id'`, `get` should be able to handle claim IDs, as proposed in #3411.
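For reference, parsing the claim-list file outside the SDK is straightforward with the standard `csv` module. This is a sketch under assumptions: it presumes the file written by #3422 has a header row containing a `claim_id` column, which may not match the real column layout, and the `read_claim_ids` helper name is hypothetical.

```python
import csv
import io

def read_claim_ids(text, delimiter=";"):
    """Extract claim IDs from a claim-list file.

    The file is CSV, semicolon-separated by default. Assumes a header
    row with a 'claim_id' column (the actual layout produced by #3422
    may differ).
    """
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [row["claim_id"].strip() for row in reader if row.get("claim_id")]

# Hypothetical file contents; "aa" * 20 stands in for a 40-character claim ID.
sample = "claim_id;name\n" + "aa" * 20 + ";some-stream\n"
```

The resulting list of IDs can then be fed to `lbrynet get` one claim at a time, avoiding the single long-running daemon call.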