We are using the cross-fetch package as part of the http-client for the Ceramic protocol. We've noticed that when we fetch from the same HTTP endpoint a few thousand times in a row (as part of the client loading a few thousand streams from the same Ceramic server node), the number of open sockets/file descriptors on the client machine grows with the number of requests made. It seems each request opens a brand new TCP connection, even though every request goes to the exact same server.
Ideally, 1000 back-to-back requests to the same server would all be served over a single persistent connection rather than requiring 1000 brand new TCP connections.
We can work around this in our tests by raising the OS limit on max open file descriptors, but that's not ideal: we still pay the overhead of establishing all those unnecessary connections.