Closed ProofOfKeags closed 3 years ago
You mean if there are 1000 requests in a batch, 1000 peers are contacted, and the idea is to make that number lower?
If a block is requested and it is pruned by the underlying instance of Core, the proxy will reach out to all archive peers for the block simultaneously. This consumes network bandwidth that scales with the number of peers that your node has.
Ah, I see, this is really not good. I'd suggest making it configurable, with a small default (around 5). If none of the peers in a batch succeeds, try another batch.
As far as I understand, #6 fixed this.
Currently we race all archive peers to fetch blocks that have been pruned. We should either cap the number of peers we race, scale the fan-out logarithmically, or abandon parallel fetches altogether in favor of an in-series fetch approach.