Float16 does not appear to be supported; when tensors of this dtype are passed in, confusing assertion errors are raised instead of a clear message, such as:
"AssertionError: kmeans is not trained" or "RuntimeError: CUDA error: an illegal memory access was encountered"
from torchpq.index import IVFPQIndex
import torch
n_data = 1000000 # number of data points
d_vector = 100 # dimensionality / number of features
index = IVFPQIndex(
d_vector=d_vector,
n_subvectors=20,
n_cells=1024,
initial_size=5000,
distance="euclidean",
)
trainset = torch.randn(d_vector, n_data, device="cuda:0", dtype=torch.float16)
index.train(trainset)
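As a workaround until float16 is supported (or a clearer error is raised), the data can be upcast to float32 before it reaches the index. This is a minimal sketch of that cast, not torchpq code; it assumes only standard PyTorch and runs on CPU for illustration:

```python
import torch

# Half-precision data as produced in the repro above (CPU here for illustration).
trainset_fp16 = torch.randn(100, 1000, dtype=torch.float16)

# Upcast to float32 before handing the tensor to index.train() / index.add().
# .float() is equivalent to .to(torch.float32) and copies only if needed.
trainset_fp32 = trainset_fp16.float()

print(trainset_fp32.dtype)  # torch.float32
```

The cast doubles the memory footprint of the batch, so for large datasets it may be preferable to upcast chunk by chunk rather than all at once.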