TanGentleman / Augmenta

Automate RAG-powered workflows
MIT License

Add non-Ollama embedding support #24

Open TanGentleman opened 6 months ago

TanGentleman commented 6 months ago

I want to be able to use LM Studio for both the LLM and the embeddings. If I understand correctly, this would affect how I handle the vectorstore/retriever: I'd have to create a lower-level embeddings implementation. I could still create a vectorstore from documents, but I'd have to pass the embeddings in directly when performing a similarity search. It seems like this is how it might work:

  1. Create a vectorstore without an embedding function (is this supported?)
  2. Pass the user query string to the LM Studio Embeddings API. This should return a List[float].
  3. Perform a similarity search that differs from the current retriever implementation, since the embeddings have to be passed in directly.
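Step 2 above could be sketched with just the standard library. This assumes LM Studio's local server exposes an OpenAI-compatible embeddings endpoint at its default port; the URL and the "local-model" name are placeholders, not verified values:

```python
import json
import urllib.request
from typing import List

# Assumption: LM Studio's local server defaults to port 1234 and is
# OpenAI-compatible, so the embeddings route would be /v1/embeddings.
LM_STUDIO_URL = "http://localhost:1234/v1/embeddings"


def build_embeddings_payload(text: str, model: str = "local-model") -> dict:
    # OpenAI-style request body; "local-model" is a placeholder name.
    return {"input": text, "model": model}


def embed_query(text: str) -> List[float]:
    """Step 2: POST the query string to the embeddings endpoint and
    return the vector as a List[float]."""
    data = json.dumps(build_embeddings_payload(text)).encode()
    req = urllib.request.Request(
        LM_STUDIO_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # OpenAI-style response shape: {"data": [{"embedding": [...]}]}
        return json.load(resp)["data"][0]["embedding"]
```

The returned List[float] is exactly what step 3 needs to pass into the search directly.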

Here's the relevant method signature from the vectorstore code in langchain_core. This is likely what would work best for such tasks:

    def similarity_search_by_vector(
        self, embedding: List[float], k: int = 4, **kwargs: Any
    ) -> List[Document]:
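To see how a store built from precomputed vectors (step 1, no embedding function) would serve that call, here is a toy stand-in that mirrors the signature above. The class and cosine ranking are hypothetical illustrations, not langchain's implementation:

```python
import math
from dataclasses import dataclass, field
from typing import Any, List, Tuple


@dataclass
class Document:
    """Minimal stand-in for langchain_core.documents.Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)


class VectorOnlyStore:
    """Toy store built from precomputed (text, vector) pairs, i.e. with
    no embedding function attached. Hypothetical class for illustration;
    it only mirrors the similarity_search_by_vector call shape."""

    def __init__(self, pairs: List[Tuple[str, List[float]]]):
        self._pairs = pairs

    def similarity_search_by_vector(
        self, embedding: List[float], k: int = 4, **kwargs: Any
    ) -> List[Document]:
        def cosine(a: List[float], b: List[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        # Rank stored pairs against the externally supplied query vector.
        ranked = sorted(
            self._pairs, key=lambda p: cosine(embedding, p[1]), reverse=True
        )
        return [Document(page_content=text) for text, _ in ranked[:k]]
```

Usage would be: embed the query via the external API, then call `store.similarity_search_by_vector(query_vector, k=4)` with that vector passed in directly, exactly as step 3 describes.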
TanGentleman commented 6 months ago

Still having issues using the LM Studio Embeddings API in the same retriever. Going to deprioritize this, since the API will likely evolve after an official release.