Problem
To explore the impact on performance, I want to expose a configuration kwarg for connection_pool_maxsize on Index.
Solution
This connection_pool_maxsize value is passed to urllib3.PoolManager as maxsize. This parameter controls how many connections are cached for a given host. If we use a large number of threads to increase parallelism while this maxsize value stays relatively small, we incur unnecessary overhead establishing and discarding connections beyond the maxsize that can be cached.
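For context, the relationship to urllib3 can be sketched as follows; the value 25 here is illustrative, not a default:

```python
import urllib3

# maxsize bounds how many connections are kept alive per host; with
# block=False (urllib3's default), extra connections are still created
# on demand but are discarded once the pool is full again.
http = urllib3.PoolManager(maxsize=25)

# PoolManager forwards these kwargs to each per-host connection pool.
print(http.connection_pool_kw["maxsize"])  # 25
```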
By default, connection_pool_maxsize is set to multiprocessing.cpu_count() * 5. In Google Colab, the CPU count is only 2, so this default is fairly limiting.
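As a quick illustration of how small that default can be (the 25-thread figure is hypothetical, chosen to match the usage example below):

```python
import multiprocessing

# Mirror the default formula: cpu_count() * 5 cached connections.
default_maxsize = multiprocessing.cpu_count() * 5
print(default_maxsize)

# On a 2-CPU Colab runtime this is 2 * 5 = 10, so running, say,
# 25 worker threads would constantly establish and discard
# connections above that cap.
```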
Usage
from pinecone import Pinecone
pc = Pinecone(api_key='key')
index = pc.Index(
    host="jen1024-dojoi3u.svc.apw5-4e34-81fa.pinecone.io",
    pool_threads=25,
    connection_pool_maxsize=25
)
Type of Change
[x] New feature (non-breaking change which adds functionality)
Test Plan
I ran some local performance tests and confirmed this setting does have an impact on performance.