-
Performance: With just 10B parameters, [Ovis1.6-Gemma2-9B](https://huggingface.co/AIDC-AI/Ovis1.6-Gemma2-9B) leads the OpenCompass benchmark among open-source MLLMs within 30B parameters.
-
It would be great to add this 4-bit quantized version of Ovis 1.6, so it can run on lower memory: [https://huggingface.co/ThetaCursed/Ovis1.6-Gemma2-9B-bnb-4bit](https://huggingface.co/ThetaCursed/Ovis1.6-Gemma2-9B-bnb-4bit)
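Not an official recipe, just a minimal sketch of how a pre-quantized bnb-4bit checkpoint like this one is typically loaded with `transformers` and bitsandbytes (the repo id comes from the link above; the compute dtype is an assumption, and preprocessing/generation should follow the Ovis1.6 model card):

```python
# Minimal sketch (assumption, not the official loader): load the pre-quantized
# bnb-4bit Ovis1.6 checkpoint; the quantization config is read from the repo.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ThetaCursed/Ovis1.6-Gemma2-9B-bnb-4bit",  # pre-quantized 4-bit weights
    torch_dtype=torch.bfloat16,                # compute dtype for non-quantized parts (assumption)
    trust_remote_code=True,                    # Ovis ships custom modeling code
    device_map="auto",                         # place layers on the available GPU(s)
)
model.eval()
```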
-
### Please make sure these conditions are met
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of anndata.
- [X] (optio…
-
### 🚀 The feature, motivation and pitch
Hi, I wanted to try the model AIDC-AI/Ovis1.6-Gemma2-9B, but I'm getting the error "Model architectures ['Ovis'] are not supported for now". Is it planned for t…
-
```
Processing 1 images
image shape: (600, 300, 4) min: 23.00000 max: 255.00000 uint8
Traceback (most recent call last):
  File "blur.py", line 371, in
    video_path=…
```
-
Could anyone please advise if it is possible to run inference with OVIS 1.6 on a single 4090 GPU? After loading the model, it appears to consume approximately 20GB of VRAM. I attempted an inference, b…
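Not a confirmed fix, but a common workaround when a bf16 load leaves too little headroom on a 24 GB card is to quantize on the fly with bitsandbytes. A minimal sketch (the 4-bit settings below are assumptions, not values recommended by the Ovis authors; preprocessing/generation should follow the official model card):

```python
# Sketch: on-the-fly 4-bit quantization to leave headroom on a single 24 GB GPU.
# Quantization settings are assumptions, not official Ovis recommendations.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",                 # NF4 quantization (assumed choice)
    bnb_4bit_compute_dtype=torch.bfloat16,     # compute in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "AIDC-AI/Ovis1.6-Gemma2-9B",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
```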
-
Title.
https://x.com/mervenoyann/status/1831409380040044762
-
Hey, while running the 4-bit quantized model from https://huggingface.co/ThetaCursed/Ovis1.6-Gemma2-9B-bnb-4bit I am getting the following error:
```
{
"name": "RuntimeError",
"message": "self an…
-
I want to use multiple GPUs for inference, and I load the model with device_map='auto'. However, I keep running into this problem: Expected all tensors to be on the same device, but found at least two dev…
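A hedged sketch of one common workaround (an assumption, not a confirmed fix for Ovis specifically): constrain the automatic sharding with `max_memory`, inspect where the submodules landed, and make sure the inputs are moved to the device that holds the input embeddings before calling generate.

```python
# Sketch of a generic multi-GPU workaround (assumption, not a confirmed Ovis fix):
# cap per-GPU memory so accelerate places modules more predictably, then check
# the resulting device map and send inputs to the first (embedding) device.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "AIDC-AI/Ovis1.6-Gemma2-9B",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
    max_memory={0: "20GiB", 1: "20GiB"},  # adjust to your cards
)

# hf_device_map shows where each submodule landed, which helps locate the
# cross-device boundary that triggers the "same device" error.
print(model.hf_device_map)

# Inputs should live on the device of the first (embedding) module.
first_device = next(iter(model.hf_device_map.values()))
# input_ids = input_ids.to(first_device)  # then call model.generate(...)
```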
-