-
Hi, great research!
Is there a way to save the merged model somewhere and load it directly the next time we want to use it?
Right now, it is taking a long time to load and merge the …
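For what it's worth, merged weights can be persisted once and reloaded directly on later runs. With peft the usual pattern is `model.merge_and_unload()` followed by `save_pretrained(path)`, after which `from_pretrained(path)` skips the merge entirely. A minimal numpy sketch of the merge-once, reload-later idea (array names and file path are illustrative only):

```python
import os
import tempfile
import numpy as np

rng = np.random.default_rng(0)

# Base weight plus a LoRA-style low-rank update (W + A @ B).
W = rng.standard_normal((64, 64)).astype(np.float32)
A = rng.standard_normal((64, 4)).astype(np.float32)
B = rng.standard_normal((4, 64)).astype(np.float32)

# Merge once: fold the adapter into the base weight.
W_merged = W + A @ B

# Persist the merged weight so future loads skip the merge step.
path = os.path.join(tempfile.mkdtemp(), "merged.npy")
np.save(path, W_merged)

# Later session: load the pre-merged weight directly.
W_loaded = np.load(path)
assert np.allclose(W_loaded, W + A @ B)
```

With a real checkpoint the same idea applies: merge once, `save_pretrained` the result, and point subsequent `from_pretrained` calls at the merged directory.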
-
I'm trying to instruction-tune llava-next models following the llava_vsft.py example shared for llava-1.5.
```
python vsft.py \
--dataset_name="HuggingFaceH4/llava-instruct-mix-vsft" \
--…
-
### Question
I downloaded llava-llama-2-13b from:
https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview
Then I quantized the model to 4-bit using:
```
git clone htt…
-
### Describe the issue
Issue: AttributeError: 'NoneType' object has no attribute 'skip_next'
Command:
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode…
-
### The model to consider.
The llava-next-video project has already been released, and the test results are quite good. Are there any plans to support this project?
`https://github.com/LLaVA-VL/LLaV…
-
### System Info
- `transformers` version: 4.45.0.dev0
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- Huggingface_hub version: 0.24.6
- Safetensors version: 0…
-
### Describe the issue
Issue:
I have enabled the M1 chip using `--device mps` but I still get the errors.
Command:
```
python3 -m llava.serve.cli \
--model-path liuhaotian/llava-v1.5-7b…
-
I would like to fine-tune llava-next based on this codebase. Are there any specific steps or potential pitfalls I should be aware of during the integration process?
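As a starting point, one common setup for fine-tuning LLaVA-style models is LoRA on the language model's attention projections. The exact target modules depend on the backbone, so the values below are illustrative assumptions, not this codebase's prescribed configuration:

```python
from peft import LoraConfig

# Illustrative LoRA settings; r, lora_alpha, and target_modules
# should be adapted to the actual backbone in use.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    # Assumption: Llama/Qwen-style attention projection names.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

A frequent pitfall is letting LoRA touch the vision tower or multimodal projector unintentionally; calling `print_trainable_parameters()` on the wrapped model helps confirm that only the intended modules will train.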
-
### System Info
transformers==4.43.3
When I run the video-inference example from [https://huggingface.co/llava-hf/LLaVA-NeXT-Video-7B-hf](https://huggingface.co/llava-hf/LLaVA-NeXT-Video-7B-hf),
it's …
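For context, the LLaVA-NeXT-Video examples decode the clip and sample a fixed number of frames uniformly before handing them to the processor. A minimal sketch of that sampling step, assuming the video is already decoded into a frame array (the helper name and frame count are illustrative):

```python
import numpy as np

def sample_frame_indices(total_frames: int, num_samples: int = 8) -> np.ndarray:
    """Pick `num_samples` frame indices spread uniformly across the clip."""
    return np.linspace(0, total_frames - 1, num_samples).astype(int)

# Toy "video": 120 decoded frames of 2x2 RGB.
video = np.zeros((120, 2, 2, 3), dtype=np.uint8)
indices = sample_frame_indices(len(video), 8)
clip = video[indices]          # the frames actually sent to the model
print(indices.tolist())        # [0, 17, 34, 51, 68, 85, 102, 119]
```

Sampling a small, evenly spaced subset keeps the token count bounded regardless of clip length, which is why most video-LLM pipelines do it before the processor step.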
-
I can run inference on a single GPU, but it fails with `CUDA_VISIBLE_DEVICES=0,1`.
I have 8×A800 GPUs and am trying to run the llava-onevision-qwen2-72b-ov model; here is the bug:
```
LLaVA-NeXT/llava/model/llava_arch.py", li…