Closed anguy044 closed 1 week ago
Hello,
Yes, the current setup doesn't support the Gemini APIs. However, I think we can run Gemini or Vertex embeddings through the OpenAI-compatible API that Vertex exposes. I'm also looking for a solution here. Kindly share if you have run the workaround successfully.
https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/call-vertex-using-openai-library
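A minimal sketch of that workaround, assuming the OpenAI-compatible Vertex endpoint described in the page above. The project ID, location, and the `google/text-embedding-005` model ID are placeholders/assumptions, not something LightRAG provides; verify the exact endpoint path and supported model names against the linked docs:

```python
def vertex_openai_base_url(project_id: str, location: str) -> str:
    """Build the OpenAI-compatible base URL for a Vertex AI project/region.

    Assumed endpoint pattern from the Vertex docs linked above; double-check
    it for your region before relying on it.
    """
    return (
        f"https://{location}-aiplatform.googleapis.com/v1/projects/"
        f"{project_id}/locations/{location}/endpoints/openapi"
    )


def embed_texts(texts, project_id, location, token):
    """Embed a list of strings via Vertex's OpenAI-compatible API.

    `token` is a short-lived OAuth access token, e.g. the output of
    `gcloud auth print-access-token`. Requires the `openai` package.
    """
    from openai import OpenAI  # lazy import so the helper above works without it

    client = OpenAI(
        base_url=vertex_openai_base_url(project_id, location),
        api_key=token,
    )
    resp = client.embeddings.create(
        model="google/text-embedding-005",  # assumed model ID; verify in the docs
        input=list(texts),
    )
    return [d.embedding for d in resp.data]
```

The resulting function has the shape LightRAG expects for a custom embedding callable (texts in, vectors out), so it could be wired in the same way the OpenAI embedding functions in `llm.py` are.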
The authors replied to a similar question in a previous thread, #175.
You can have a look at that.
I don't believe there's a built-in function in LightRAG currently for using Google Vertex embedding models, judging from the source files (llm.py); correct me if I'm wrong or missed it.
I'm curious whether this is something that will be available in future updates?
Thank you.