We are currently trying to get the sub-request limit increased. However, for now we are limited by CF's cap of 1,000 sub-requests per request.
These sub-requests break down as follows (see the sketch after this list):
List the CARs for the root CID from the DUDEWHERE bucket (pages of size 1,000)
Read one index per CAR file from SATNAV
Read blocks from the CAR files in CARPARK
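To make the breakdown concrete, here is a minimal TypeScript sketch of the read path and where each sub-request is spent. This is an illustration, not freeway's actual code: the binding names match the buckets above, but the key layouts (`<root>/<car>` in DUDEWHERE, `<car>/<car>.car.idx` in SATNAV) are assumptions.

```ts
interface Env {
  DUDEWHERE: R2Bucket // maps root CID -> CAR CIDs
  SATNAV: R2Bucket    // per-CAR block indexes
  CARPARK: R2Bucket   // the CAR files themselves
}

async function readPath (env: Env, rootCid: string) {
  // 1 sub-request per page of up to 1,000 keys: list the CARs for this root.
  const page = await env.DUDEWHERE.list({ prefix: `${rootCid}/`, limit: 1000 })
  const carCids = page.objects.map(o => o.key.split('/')[1])

  // 1 sub-request per CAR: read its index from SATNAV.
  const indexes = await Promise.all(
    carCids.map(carCid => env.SATNAV.get(`${carCid}/${carCid}.car.idx`))
  )

  // N further sub-requests: ranged reads of blocks from CAR files in CARPARK,
  // one sub-request per (possibly coalesced) range read.
  return { carCids, indexes }
}
```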
There are two ways this can be problematic:
We fail to fulfil the request because we hit the sub-request limit before returning a 200 (e.g. while finding the root CID block)
We run out of sub-requests while already streaming content, after the 200 has been returned
In the first case, w3link falls back to ipfs.io, so we can still provide a response, even if a slow one. If we hit the limit while already streaming, the request fails unexpectedly for the user, and with no visibility for us.
With the above in mind, and especially targeting the second issue, freeway should fail up front if it will not be able to fully fulfil the request. Given the known limits, we will perform a single request to DUDEWHERE to get a list of up to 1,000 CARs. For each CAR returned, we then need to read its index from SATNAV and perform multiple reads from the CAR file itself.
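As a hedged sketch of that fail-fast behaviour, assuming the Env binding above and a hypothetical MAX_CARS constant (the 501 status is one plausible choice for an error w3link can fall back on, not a confirmed decision):

```ts
const MAX_CARS = 250 // proposed starting value, discussed below

async function handleGatewayRequest (env: Env, rootCid: string): Promise<Response> {
  // One sub-request: a single DUDEWHERE list of up to 1,000 CARs.
  const page = await env.DUDEWHERE.list({ prefix: `${rootCid}/`, limit: 1000 })

  // Fail before sending the 200: a clean error (which w3link can turn into
  // an ipfs.io fallback) beats a stream that dies part-way through.
  if (page.truncated || page.objects.length > MAX_CARS) {
    return new Response('DAG spans too many CARs to serve within sub-request limits', {
      status: 501
    })
  }

  // ...read indexes from SATNAV and stream blocks from CARPARK...
  return new Response(null, { status: 200 })
}
```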
We currently have no metrics on which to base the maximum value. If we consider a maximum of 250 CARs supported in Freeway, this means 1 request to DUDEWHERE, 250 requests to SATNAV, and N requests to CARPARK. Assuming around 10 blocks per CAR, and an average of 5 requests to fetch all the blocks in a CAR file, we would hit the limit.
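Spelling out the arithmetic behind that estimate (the per-CAR figures are the assumptions stated above, not measurements):

```ts
const cars = 250
const dudewhereRequests = 1       // one list call covers up to 1,000 CARs
const satnavRequests = cars       // one index read per CAR = 250
const carparkRequests = cars * 5  // ~5 ranged block reads per CAR = 1,250
const total = dudewhereRequests + satnavRequests + carparkRequests
// total = 1,501 sub-requests: the worst case already exceeds the 1,000 budget
```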
Therefore, my proposal is to start with a maximum of 250 CARs. For current clients this means supporting sizes up to 2.5GB with the default chunk size of 10MB (250 × 10MB). In the meantime, we are trying to get the limits increased.