Implements a few fixes for bugs causing errors on large transfers:
- A basic backpressure mechanism: if the queues on a gateway are full, the `chunk_requests` POST request now returns how many chunks were added and the current queue size, so the HTTP client making the request can send the remaining (unadded) chunks to a different gateway, or wait and try again. With this change, I was able to transfer 1TB.
- Caps the total number of HTTP connections per gateway at 64, rather than 32 per destination, since the higher connection counts seem to have been causing issues.
- Allows empty chunks, since object stores can have empty folders which we still want transferred.
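The backpressure flow can be sketched roughly as below. This is a minimal illustration, not the project's real API: `register_chunks`, `dispatch`, and the gateway dict shape are hypothetical stand-ins for the `chunk_requests` handler and the client-side retry loop.

```python
def register_chunks(gateway, chunks):
    # Simulated chunk_requests POST handler: accept chunks up to the
    # gateway's remaining queue capacity, then report how many were
    # added and the resulting queue size (hypothetical response shape).
    added = min(gateway["capacity"] - len(gateway["queue"]), len(chunks))
    gateway["queue"].extend(chunks[:added])
    return {"chunks_added": added, "queue_size": len(gateway["queue"])}

def dispatch(chunks, gateways, max_rounds=100):
    # Client-side loop: offer pending chunks to each gateway in turn.
    # Chunks a full gateway rejects are re-sent to the next gateway
    # on the following iteration instead of being dropped.
    pending = list(chunks)
    rounds = 0
    while pending and rounds < max_rounds:
        for gw in gateways:
            if not pending:
                break
            resp = register_chunks(gw, pending)
            pending = pending[resp["chunks_added"]:]
        rounds += 1
    return pending  # chunks that could not be placed anywhere
```

The key point is that the client treats a partial acceptance as a signal rather than an error: it trims the accepted prefix and retries the rest elsewhere.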
There are still issues with SSH connections on long-running transfers, and listing files can take an extremely long time on the client (#841), so these need to be fixed for very large transfers.