-
Hi, may I know whether MiniLM-L12 can work with this?
-
Documented here: https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#embeddings
Example:
```python
import llama_cpp
model = llama_cpp.Llama(model_path="all-MiniLM-L6-v2.e4ce9877.q8_…
```
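The snippet above is cut off; as a minimal sketch of the documented pattern, assuming a local GGUF conversion of the model (the file name and the cosine helper below are my own illustrations, not from the issue — an L12 conversion should load the same way):

```python
import math
import os


def cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


MODEL_PATH = "all-MiniLM-L6-v2.q8_0.gguf"  # hypothetical local GGUF file

if os.path.exists(MODEL_PATH):
    import llama_cpp

    # embedding=True switches llama-cpp-python into embedding mode
    llm = llama_cpp.Llama(model_path=MODEL_PATH, embedding=True)
    v1 = llm.embed("hello world")
    v2 = llm.embed("greetings, planet")
    print(cosine(v1, v2))
```

The model load is guarded behind the file check so the helper can be exercised without the weights present.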
-
This is for the **langchain4j-embeddings** library
**Describe the bug**
I am trying to utilize langchain4j inside an Apache Pulsar Function, which starts a Java class from its own class: `org.ap…
-
### What is the issue?
A Sonatype Nexus proxy was configured and working, but two weeks ago it started returning an error when requesting a manifest.
```
2024-06-24 22:19:00,375+0300 DEBUG [nexus-htt…
```
-
Currently, many of the fields in the model metadata are not filled.
These should either be specified manually, or we should find ways to derive them (from the HuggingFace Hub, for example).
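One possible shape for that fallback logic, as a sketch: keep manually specified values and fill only the blanks from derived ones. The helper, the field names, and both dictionaries below are made-up illustrations; in practice the `derived` dict could come from a Hub lookup such as `huggingface_hub.model_info`.

```python
def fill_metadata(manual, derived):
    # keep manually specified values; fill blanks from derived ones
    return {k: manual.get(k) or derived.get(k) for k in set(manual) | set(derived)}


# `manual` is what a user typed in; `derived` is what a Hub lookup might return
manual = {"name": "all-MiniLM-L6-v2", "license": None}
derived = {"license": "apache-2.0", "pipeline_tag": "sentence-similarity"}
print(fill_metadata(manual, derived))
```

Manually set fields always win; derived values only plug the holes.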
-
### Before submitting your bug report
- [ ] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [ ] I'm not able to find an [open issue]…
-
I tried running `streamlit_app.py`.
In the code:
```python
# generate embeddings for each chunk
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-…
```
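For context, a runnable sketch of the chunk-then-embed flow the snippet truncates. The character-based `chunk_text` helper and the sample text are my own; the `HuggingFaceEmbeddings` call follows the LangChain embeddings interface, with the heavy model load guarded behind a flag so nothing is downloaded here:

```python
def chunk_text(text, size=200, overlap=50):
    # split text into overlapping character chunks
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


chunks = chunk_text("some long document " * 50)

RUN_MODEL = False  # flip on where langchain and the model download are available
if RUN_MODEL:
    from langchain_community.embeddings import HuggingFaceEmbeddings

    # generate embeddings for each chunk
    embeddings = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-MiniLM-L6-v2"
    )
    vectors = embeddings.embed_documents(chunks)
    print(len(vectors))  # one vector per chunk
```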
-
- [ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Facing an error when using LangChain-wrapped Hugging Face models**
I am …
-
Here is my candle implementation (taken from the examples themselves):
```rust
pub fn encode(&self, prompt: &str) -> Result {
    let tokens = self.tokenizer
        .encode(prompt, true)
…
```
-
Hi everyone,
First of all, thanks to PrithivirajDamodaran for your great work!
My question is: how can I use the model cross-encoder/ms-marco-MiniLM-L-6-v2, because its performance is almost as goo…
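As a sketch of the usual usage pattern: that checkpoint loads through sentence-transformers' `CrossEncoder` class and scores (query, passage) pairs. The query, passages, and the small `rerank` helper below are my own illustrations, and the model call is guarded so the snippet runs without a download:

```python
def rerank(passages, scores):
    # order passages by descending relevance score
    ranked = sorted(zip(passages, scores), key=lambda t: t[1], reverse=True)
    return [p for p, _ in ranked]


query = "how to score query/passage pairs"
passages = ["Cross-encoders score query/passage pairs jointly.", "Unrelated text."]

RUN_MODEL = False  # enable where sentence-transformers and the model are available
if RUN_MODEL:
    from sentence_transformers import CrossEncoder

    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = model.predict([(query, p) for p in passages])
    print(rerank(passages, scores))
```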