-
### The model to consider.
LLaVA-NeXT-Video* (LlavaNextVideoForConditionalGeneration)
### The closest model vllm already supports.
Llava-Next (LlavaNextForConditionalGeneration)
### What's your …
-
### Issue:
When loading the model in 4-bit, I am getting this error: `RuntimeError: expected scalar type Float but found Half`.
### Code to reproduce:
```
from llava.model.builder import load_pre…
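# (The original repro is truncated above.) A common cause of
# "expected scalar type Float but found Half" is mixing float32 module
# weights with float16 inputs; a minimal, hedged illustration of the
# mismatch and one possible fix -- casting the input to the weights' dtype:
import torch

layer = torch.nn.Linear(4, 4)        # weights default to float32
x = torch.randn(1, 4).half()         # half-precision input

raised = False
try:
    layer(x)                         # mixed dtypes raise a RuntimeError
except RuntimeError:
    raised = True

out = layer(x.float())               # cast input to match the weights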
-
### Question
Is there a way I can download the checkpoints of the pre-trained LLAVA, and then fine-tune the data on my custom data set? I don't have a GPU so any tips would be appreciated.
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
### Describe the bug
I have one A100 GPU card.
f…
-
https://github.com/OpenBMB/ChatDev
https://github.com/microsoft/autogen
https://github.com/geekan/MetaGPT
https://github.com/reworkd/AgentGPT
https://github.com/Link-AGI/AutoAgents
https://github…
-
There are multiple mentions of a multi modal sequence parallel system for inference which can be seamlessly integrated with HF transformers. However, I am not able to follow this through the codebase …
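For context, sequence parallelism shards the token dimension of a long input across devices, with each rank processing its own chunk before results are gathered. A toy pure-Python sketch of just the partitioning step (illustrative only; not taken from this codebase):

```python
def partition_sequence(tokens, world_size):
    """Split a token sequence into contiguous chunks, one per rank."""
    chunk = -(-len(tokens) // world_size)  # ceiling division
    return [tokens[i * chunk:(i + 1) * chunk] for i in range(world_size)]

# 10 tokens over 4 ranks: three ranks get 3 tokens, the last gets 1
parts = partition_sequence(list(range(10)), 4)
```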
-
I re-downloaded this repo and tried `transformers` versions `4.40.0.dev`, `4.40.0`, and `4.41.2`; the result is still `['']`.
Some of the things I did:
All the weights I use are local. Below are my changes.
1. `…
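One frequent cause of an empty decode like `['']` (a hedged guess; the steps above are truncated) is slicing the generated ids with the wrong prompt length before decoding. The slicing pitfall in isolation, with plain lists standing in for token ids:

```python
prompt_len = 5
output_ids = list(range(prompt_len)) + [101, 102]  # prompt + 2 new tokens

new_tokens = output_ids[prompt_len:]   # correct: only the new tokens
empty = output_ids[len(output_ids):]   # off-by-all slice decodes to ''
```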
-
Hi, I am trying to fine-tune LLaVA-NeXT with my custom dataset, using the "finetune_clip.sh" shell script.
I made some edits to the script for convenience and to fit my task, like this:
```
…
-
I tried to run llava-v1.6-34b-hf-awq and succeeded, but how can I run the test for LLaVA-v1.5 ConditionalGeneration?
https://github.com/casper-hansen/AutoAWQ/pull/250
The bug in the example is likely:
1. ma…
-
### Your current environment
```
Ray v2.23
Python 3.10
vllm 0.5.4
cuda 12.1
```
### 🐛 Describe the bug
We are attempting to utilize Ray v2.23 for batch inferencing, specifically on multi…
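Without the full report (it is truncated above), here is a minimal plain-Python stand-in for Ray Data's `map_batches` batch-inference pattern, with a dummy predictor replacing the vLLM engine (all names here are illustrative, not from the original setup):

```python
# A predictor class is constructed once and then called per batch,
# mirroring how a model would be loaded once and reused.
class Predictor:
    def __init__(self):
        self.calls = 0                      # stands in for loading a model

    def __call__(self, batch):
        self.calls += 1
        return [f"echo: {p}" for p in batch]

def map_batches(items, fn_cls, batch_size):
    fn = fn_cls()                           # one instance, reused per batch
    out = []
    for i in range(0, len(items), batch_size):
        out.extend(fn(items[i:i + batch_size]))
    return out

results = map_batches(["a", "b", "c"], Predictor, batch_size=2)
```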