-
Hi team:
I followed every step of your wizard:
same dataset, the newest repo, and LLaVA updated to the newest version.
I also copied every new Python file into the LLaVA folder.
But, still repo…
-
Hello, after training with QLoRA I got the produced checkpoints under:
```
ll output/lora_vision_test/
adapter_config.json
adapter_model.safetensors
checkpoint-178/
config.json
non_lora_state_dict.bin
…
```
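In case it is useful at this point, here is a minimal sketch (not the repo's own tooling) of loading and merging such a LoRA adapter with the standard PEFT API; the base-model path and output directory are placeholders, and if the repo provides its own merge script that should be preferred:

```
# Minimal sketch: merge a LoRA adapter into its base model with PEFT.
# Paths are placeholders; prefer the repo's own merge script if one exists.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/base-model"           # placeholder
adapter_path = "output/lora_vision_test"   # the checkpoint directory listed above

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_path)  # reads adapter_config.json / adapter_model.safetensors
merged = model.merge_and_unload()          # fold the LoRA weights back into the base model

merged.save_pretrained("output/merged_model")
AutoTokenizer.from_pretrained(base_path).save_pretrained("output/merged_model")
```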
-
After starting pretraining, there is a bug:
```
Traceback (most recent call last):
  File "/data2/LLaVA-pp/LLaVA/llava/train/train_mem.py", line 4, in <module>
    train(attn_implementation="flash_attention_2")
…
```
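Not a fix for this specific traceback (the error itself is cut off above), but for reference, a minimal sketch of how `attn_implementation` is normally passed to a Transformers model, with a fallback to SDPA when flash-attn is not importable; the model path is a placeholder:

```
# Sketch: use flash_attention_2 only when the flash-attn package is importable,
# otherwise fall back to PyTorch SDPA. The model path is a placeholder.
import importlib.util
import torch
from transformers import AutoModelForCausalLM

attn_impl = "flash_attention_2" if importlib.util.find_spec("flash_attn") else "sdpa"

model = AutoModelForCausalLM.from_pretrained(
    "path/to/base-model",           # placeholder
    torch_dtype=torch.bfloat16,     # flash-attn requires fp16/bf16 weights
    attn_implementation=attn_impl,
)
```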
-
### Describe the issue
Issue:
In pretraining or finetuning, the training always gets stuck after the log "Formatting inputs...Skip in lazy mode". Every time I need to force shut down my GPU server b…
-
### Your current environment
I encountered a few issues while running Phi-3-vision with vLLM built from the current main branch.
1. Dependency:
`torchvision` is a dependency under [image_pro…
-
I am using an M1, on commit 928e0b70.
When I run
`./llava-cli -m ./models/llava-v1.6-mistral-7b/ggml-mistral-7b-q_5_k.gguf --mmproj ./models/llava-v1.6-mistral-7b/mmproj-mistral7b-f16-q6_k.gguf …
-
![QQ screenshot 20240611082851](https://github.com/heshengtao/comfyui_LLM_party/assets/130342241/899689dc-9b08-4ca6-9ed9-7d1152917738)
-
I wonder whether there is a guideline on hosting a customized LLaVA model. I have both the mm projector and base model GGUF files. Feel free to point me to any related materials/links.
Many thanks,
Rui
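For reference while waiting for pointers, a minimal sketch of running such a pair with llama.cpp's `llava-cli`, using the same `-m`/`--mmproj` flags as the command quoted earlier on this page; all file names and the prompt are placeholders:

```
# Sketch: run a custom LLaVA GGUF pair (base model + mm projector) with llama.cpp.
# File names and the prompt are placeholders.
./llava-cli \
  -m ./models/my-llava/base-model.gguf \
  --mmproj ./models/my-llava/mmproj-f16.gguf \
  --image ./example.jpg \
  -p "Describe this image."
```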
-
Hey folks,
I was thinking about training a small Phi-3 + OpenCLIP LLaVA-style model (building off the MobileVLM work). Do you plan to support that sort of multi-modal model with ONNX (or ideally on…
-
### Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the [README.md](https://github.com/Mozilla-Ocho/llamafile/blob/master/READ…