SirRob1997 opened 2 years ago
There is a good reason for that, which is that the IndexShards object that distributes search/add to the GPU sub-indexes is a CPU index. However, since the search and add functions rely only on pointer arithmetic and don't access the data itself, it may be possible to relax this constraint. Let's talk about it with @wickedfoo
I see, it would make a lot of sense to relax this from a usage perspective. I guess, for now, the correct usage is to transfer the tensors to CPU, and the `IndexShards` object will move them to GPU again?
Summary
I'm using a sharded index (`IndexShards`) on multiple GPUs and want to search it using tensors that are already on the GPU. I've done the `import faiss.contrib.torch_utils` that should change the functions to accept Torch GPU tensors. This works in the single-GPU setup but not for the sharded one; see the reproduction instructions below.

Stack trace:
When the single-GPU parts are commented back in instead, everything works fine!
Platform
OS: Ubuntu 18.04.4 LTS
Faiss version: 1.7.0 // 1.7.1 // 1.7.1.post2
Installed from: pip
Running on: GPU
Interface: Python
Reproduction instructions