ht0rohit opened this issue 9 months ago
As far as I can see, the only part of chromadb that uses the GPU is your embedding function, so that is where GPU support needs to be enabled. This can be done quite easily, e.g. for the SentenceTransformerEmbeddingFunction:

```python
emb_func = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name=model_name,
    device="cuda",
    normalize_embeddings=True,
)
```
@ht0rohit, what embedding function are you using? If it is sentence-transformers, then @Halvani's suggestion will work; for other EFs, you may need to do it a bit differently.
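For EFs that don't go through sentence-transformers, a hedged sketch for Chroma's default ONNX MiniLM embedding function (this assumes the `preferred_providers` parameter available in recent chromadb releases; the CUDA provider only takes effect when the `onnxruntime-gpu` package is installed, otherwise ONNX Runtime falls back to CPU):

```python
# Sketch: enable the GPU for Chroma's default (ONNX MiniLM) embedding
# function by listing CUDAExecutionProvider first. ONNX Runtime picks the
# first provider in the list that is actually available.
def gpu_provider_list(use_gpu: bool = True) -> list:
    """Build an ONNX Runtime provider list, preferring CUDA when requested."""
    providers = ["CPUExecutionProvider"]
    if use_gpu:
        providers.insert(0, "CUDAExecutionProvider")
    return providers

def make_default_ef(use_gpu: bool = True):
    """Create Chroma's default EF with the chosen provider order.

    Assumption: `preferred_providers` exists on ONNXMiniLM_L6_V2 in the
    installed chromadb version; check your release if this raises.
    """
    from chromadb.utils.embedding_functions import ONNXMiniLM_L6_V2

    return ONNXMiniLM_L6_V2(preferred_providers=gpu_provider_list(use_gpu))
```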
@tazarov how, please?
```python
import torch
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings
from langchain_community.vectorstores import Chroma

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
embeddings = FastEmbedEmbeddings(device=device)
vectorstore = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
```

I had the best results with this... hope this helps!
Can I use the GPU now with HuggingFace embeddings, for example?
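For Hugging Face models specifically, a hedged sketch: when the model is run locally, it can be loaded on the GPU through chromadb's SentenceTransformerEmbeddingFunction (the model id below is only an example). Note that chromadb's HuggingFaceEmbeddingFunction calls the hosted Inference API instead, so a local GPU would not help there.

```python
# Sketch: pick a device defensively, then build a sentence-transformers EF
# around a Hugging Face model id so embedding runs on the GPU when present.
def pick_device() -> str:
    """Return "cuda" when a CUDA device is visible to PyTorch, else "cpu"."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        # No torch installed: sentence-transformers would not run anyway,
        # but keep the helper usable for the CPU path.
        return "cpu"

def make_hf_ef(model_name: str = "sentence-transformers/all-MiniLM-L6-v2"):
    """Build a GPU-aware EF for a (local) Hugging Face model."""
    from chromadb.utils import embedding_functions

    return embedding_functions.SentenceTransformerEmbeddingFunction(
        model_name=model_name,
        device=pick_device(),
    )
```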
Describe the problem
I want to load a large set of documents and save them to the persistent Chroma client, but since there are so many documents, generating the embeddings and saving them takes a lot of time. Can I utilize GPUs to make this process faster?
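A minimal sketch of how this could look once a GPU-enabled embedding function is attached to the collection: adding documents in batches keeps the GPU fed with large inputs instead of one document at a time. The `collection` object and the batch size of 256 are assumptions here.

```python
# Sketch: chunk (documents, ids) pairs and add them to a chromadb
# collection batch by batch. The embedding function attached to the
# collection (assumed GPU-enabled) does the heavy lifting per batch.
from typing import Iterator, List, Sequence, Tuple

def batched(
    docs: Sequence[str], ids: Sequence[str], size: int
) -> Iterator[Tuple[List[str], List[str]]]:
    """Yield (documents, ids) chunks of at most `size` items each."""
    for start in range(0, len(docs), size):
        yield list(docs[start:start + size]), list(ids[start:start + size])

def add_in_batches(collection, docs: Sequence[str], ids: Sequence[str], size: int = 256) -> None:
    """Add documents to a chromadb collection in fixed-size batches."""
    for chunk_docs, chunk_ids in batched(docs, ids, size):
        collection.add(documents=chunk_docs, ids=chunk_ids)
```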
Describe the proposed solution
GPU support when adding documents to the collection.
Alternatives considered
No response
Importance
would make my life easier
Additional Information
No response