-
I want to understand the model architecture differences between the author releases under `lmms-lab` and the HF team releases under `llava-hf`. For the same set of models, does using one over the ano…
amew0 updated
2 weeks ago
-
I encountered an issue while running the vqav2_test task on the liuhaotian/llava-v1.5-7b model. The command was executed on a setup with 32 CPUs and 7 RTX A6000 GPUs, but it failed with a subprocess.C…
-
I'm evaluating the LLaVA-Lora version (https://huggingface.co/liuhaotian/llava-v1.5-7b-lora/discussions), but the performance seems unusually low. Do you know if this is supported in the lmms-eval pip…
-
OS: 22.04.1-Ubuntu
Python: Python 3.12.2
Build fails for llama-cpp-python
```
$ pip install -r requirements.txt
...
Building wheels for collected packages: llama-cpp-python
Building wheel…
```
-
```
from ipex_llm import optimize_model
from transformers import LlavaForConditionalGeneration
model = LlavaForConditionalGeneration.from_pretrained('llava-hf/llava-1.5-7b-hf', device_map="cpu")
m…
```
-
## 🚀 Model / language coverage
The idea is to support the LLaVA model from HF. This issue mainly tracks the status.
Blocking issues:
- [ ] #735
- [ ] #124
### Minimal Repro
First o…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch…
```
-
### Question
I want to pretrain the model, but I see that `evaluation_strategy` in pretrain.sh is set to `"no"`. How can I determine whether the model is training well?
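With periodic evaluation disabled, one rough signal is the trend of the smoothed training loss. Below is a minimal toy sketch of that idea; the synthetic loss curve and the `smoothed` helper are hypothetical illustrations, not part of the LLaVA training code:

```python
import random

random.seed(0)

# Synthetic stand-in for a logged training-loss curve: exponential decay
# plus bounded noise, mimicking a healthy pretraining run.
losses = [2.0 * (0.9 ** step) + random.uniform(-0.05, 0.05) for step in range(50)]

def smoothed(values, window=10):
    """Average over the trailing window to damp step-to-step noise."""
    return sum(values[-window:]) / min(window, len(values))

early = smoothed(losses[:10])  # smoothed loss near the start of training
late = smoothed(losses)        # smoothed loss over the most recent steps

# A steadily decreasing smoothed loss suggests training is progressing,
# even without a validation set.
print(early > late)  # prints True
```

In practice you would apply the same check to the loss values the trainer already logs, rather than relying solely on end-of-training evaluation.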
-
## 🐛 Bug
I am trying to run llava with mlc-llm. On both a Linux server and a local macOS machine, I encountered this error:
(run `export RUST_BACKTRACE=full` before running the inference program…
-
https://github.com/LLaVA-VL/LLaVA-NeXT/blob/b3a46be22d5aa818fa1a23542ae3a28f8e2ed421/llava/model/llava_arch.py#L230
Not every model config has the attribute `add_faster_video` (e.g. https://hugging…
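A common defensive pattern for such optional config fields is `getattr` with a default, so that older configs which predate the attribute still load. A minimal sketch, where `VideoConfig` is a hypothetical stand-in and not the actual LLaVA config class:

```python
class VideoConfig:
    """Hypothetical stand-in for a model config missing optional fields."""
    pass

config = VideoConfig()

# Direct access (config.add_faster_video) would raise AttributeError here;
# getattr with a default falls back to False instead.
add_faster_video = getattr(config, "add_faster_video", False)
print(add_faster_video)  # prints False
```

The same pattern applied at the linked line in `llava_arch.py` would let configs without the attribute run unchanged.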