-
I have the same problem.
When I use the pipeline for inference with batch_size=1, everything is OK. However, the error occurs when inferring with batch_size>1.
transformers: 4.44.0
torch: 2.1.2
model: whispe…
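For context, with batch_size>1 the pipeline has to collate several samples into a single tensor, so inputs that cannot be stacked (e.g. audio clips of different lengths) are a frequent cause of this kind of error. A minimal pure-Python sketch of the zero-padding a collator has to do (`pad_batch` is a hypothetical helper for illustration, not part of transformers):

```python
def pad_batch(samples, pad_value=0.0):
    """Zero-pad variable-length 1-D samples to a common length.

    samples: list of lists of floats (e.g. raw audio chunks).
    Returns a rectangular list-of-lists that can be stacked into a tensor.
    """
    max_len = max(len(s) for s in samples)
    return [s + [pad_value] * (max_len - len(s)) for s in samples]

# Two clips of different lengths become one rectangular batch.
batch = pad_batch([[0.1, 0.2, 0.3], [0.4]])
```

If the pipeline's own collator already handles this, the error more likely comes from the model or the feature extractor, so the full traceback is needed to say more.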
-
Hi
I just discovered this project, and the results it provides are pretty amazing. I saw that it was updated to support the Japanese language, and that made me curious about how many epochs or hou…
-
```yaml
models:
  - model: stabilityai/stable-diffusion-xl-base-1.0
    parameters:
      weight: 1.0
    lora:
      - path: ehristoforu/dalle-3-xl-v2
  - model: stabilityai/stable-diffusion-xl-1.…
```
-
**What would you like to be added/modified**:
A benchmark suite for multimodal large language models deployed at the edge using KubeEdge-Ianvs:
1. Modify and adapt the existing edge-cloud data c…
-
- [ ] [vidore/colpali · Hugging Face](https://huggingface.co/vidore/colpali)
# ColPali: Visual Retriever based on PaliGemma-3B with ColBERT strategy
## Model Description
This model is built iterati…
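The ColBERT-style "late interaction" the title refers to can be sketched in a few lines: each query-token embedding is matched against its best document-token embedding, and the per-token maxima are summed (a pure-Python toy of MaxSim scoring, not ColPali's actual implementation):

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_embs, doc_embs):
    """ColBERT-style late interaction: sum over query tokens of the
    maximum dot-product similarity against any document token."""
    return sum(max(dot(q, d) for d in doc_embs) for q in query_embs)

# Toy 2-D embeddings: the first query token matches doc token 0 best,
# the second matches doc token 1 best.
score = maxsim_score([[1.0, 0.0], [0.0, 1.0]],
                     [[1.0, 0.0], [0.0, 2.0]])
```

In ColPali the document tokens come from image patches rather than text, but the scoring step is the same idea.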
-
I have downloaded the model weights to my computer, but I don't know how to use local LLMs and embeddings with ragas.
Here is my code, but it didn't work:
```
import typing as t
import asyncio
from ty…
-
I have no idea what I am missing. I used `git clone` on [this llava repository](https://huggingface.co/liuhaotian/llava-v1.5-13b) and changed the path in `CKPT_PTH.py`.
Help would be appreciate…
-
Hello,
Based on your code, I added Korean tokens (using a Korean emotional dataset) to the tokenizer and fine-tuned the model with the LibriTTS R dataset. The Korean dataset is slightly less than 3…
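As a side note for anyone reproducing this: after adding tokens to the tokenizer, the model's embedding matrix has to be grown to match before fine-tuning (in transformers this is `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(len(tokenizer))`). A pure-Python toy of that resize step, initializing the new rows with the mean of the existing rows (a common heuristic; the helper name is hypothetical):

```python
def resize_embeddings(embeddings, new_vocab_size):
    """Grow an embedding matrix (list of row vectors) to new_vocab_size.

    New rows are initialized to the mean of the existing rows, a common
    heuristic so added tokens start near the center of embedding space.
    """
    dim = len(embeddings[0])
    mean_row = [sum(row[i] for row in embeddings) / len(embeddings)
                for i in range(dim)]
    return embeddings + [list(mean_row)
                         for _ in range(new_vocab_size - len(embeddings))]

# Vocab grows from 2 to 4; both new rows start at the mean [2.0, 3.0].
table = resize_embeddings([[1.0, 2.0], [3.0, 4.0]], 4)
```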
-
I just followed the steps, but when I run the following code:
```
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Efficient-Large-Model/Llama-3-VILA1.5-8B")
```
…
-
**What would you like to be added/modified**:
A benchmark suite for large language models deployed at the edge using KubeEdge-Ianvs:
1. Interface Design and Usage Guidelines Document;
2. Implem…