UmanShahzad opened 3 years ago
I will add this soon. But there are two ways to do it: sequentially fetching 1000 results at a time, or splitting the request into parallel ones. Will the backend handle massive parallel requests?
The backend can handle lots of requests in parallel. We can limit the concurrency level in the client, e.g. at most 10 concurrent batches of 1000 IPs each by default.
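A minimal sketch of what that client-side limit could look like, assuming a hypothetical `handler` callable that sends one batch request and returns a dict of results (the names `batch_lookup`, `chunked`, and the defaults are illustrative, not the actual ipinfo client API):

```python
import concurrent.futures


def chunked(items, size):
    """Yield successive fixed-size chunks from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def batch_lookup(ips, handler, batch_size=1000, max_workers=10):
    """Look up IPs in batches of batch_size, keeping at most
    max_workers batch requests in flight at once."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Submit one future per batch; the pool caps concurrency.
        futures = [pool.submit(handler, batch)
                   for batch in chunked(ips, batch_size)]
        for fut in concurrent.futures.as_completed(futures):
            results.update(fut.result())
    return results
```

Bounding `max_workers` keeps the client from hammering the backend with an unbounded number of simultaneous batch requests while still parallelizing large inputs.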
Take into account use cases like https://github.com/ipinfo/python/issues/35 to ensure batch ops support is scalable & robust.