-
Your code example seems to imply that.
-
/kind bug
**What steps did you take and what happened:**
I created an inference service with an embedding model, but I cannot access any endpoint.
**What did you expect to happen:**
I expect…
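A minimal reachability check one might start from, assuming the service exposes a KServe-style v1 REST endpoint (an assumption about the serving stack, not stated in the report); the host and model name below are placeholders:

```python
import requests

# Placeholder host and model name; substitute the actual InferenceService URL
# and model name. GET /v1/models/<name> reports model readiness under the
# KServe v1 protocol, which is assumed here.
HOST = "http://embedding-service.default.example.com"
MODEL = "my-embedding-model"

resp = requests.get(f"{HOST}/v1/models/{MODEL}", timeout=10)
print(resp.status_code, resp.text)
```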
-
I'm encountering an error while trying to use the WebsiteSearchTool with Gemini Pro as the LLM. The error message is:
"Embedding dimension 768 does not match collection dimensionality 1536. This is…
-
Is it possible to use a custom embedding model?
-
Hi, given that RKNN-LLM now supports embedding, I tried different models, but none of them work with embeddings.
Can you point me to a model that supports embedding and is compatible with RKLLM?
…
-
## Description
Chunking is the process of breaking down large pieces of text into smaller chunks. For the purposes of this document, chunking occurs at ingest time for use with embedding models. The reran…
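As an illustration of the idea, a minimal character-based chunking sketch with overlap; the chunk size and overlap values are arbitrary assumptions, not taken from the document:

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping character chunks for embedding at ingest."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back so adjacent chunks share some context
    return chunks
```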
-
Hey, I'm trying to set up a Colab demo with MCDSE but I'm getting something obviously wrong: https://colab.research.google.com/drive/1aEgITiGDgKb3RSaKHcNjdHa67HVkibZc?usp=sharing
Would appreciate a seco…
-
### Bug Description
Any calls to async-await operations related to llama_index.embeddings.gemini are causing this error:
TypeError: object BatchEmbedContentsResponse can't be used in 'await'…
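Until the async path is fixed upstream, one possible workaround is to run the synchronous batch call in a worker thread instead of awaiting the non-awaitable response object. A sketch under that assumption, using an illustrative embedding model name:

```python
import asyncio

from llama_index.embeddings.gemini import GeminiEmbedding

# Model name is illustrative, not taken from the original report.
embed_model = GeminiEmbedding(model_name="models/embedding-001")

async def embed_texts(texts: list[str]) -> list[list[float]]:
    # Offload the blocking batch call to a thread rather than awaiting
    # BatchEmbedContentsResponse directly, which raises the TypeError above.
    return await asyncio.to_thread(embed_model.get_text_embedding_batch, texts)
```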
-
The original run.py saves the model in pytorch_model.bin, which cannot be loaded directly by the code provided in this repository. After replacing `trainer.save_model()` at line 422 of training/run.py…
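The comment is truncated, so the actual replacement is not shown. Purely as a hypothetical sketch, one common way to save a fine-tuned Hugging Face model so it can be reloaded with `from_pretrained()` is:

```python
# Hypothetical replacement, not the reporter's actual change. Assumes `trainer`,
# `tokenizer`, and `training_args` exist in run.py's training scope. Saves the
# model and tokenizer in the standard Hugging Face layout (config + weights) so
# AutoModel.from_pretrained(output_dir) can load them directly.
trainer.model.save_pretrained(training_args.output_dir)
tokenizer.save_pretrained(training_args.output_dir)
```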
-
Awesome Azure OpenAI RAFT sample, thanks for sharing!
In notebook _**3_raft_evaluation.ipynb**_, section _**5. Computing the evaluation metrics for both models**_, I get multiple instances of the following …