-
As far as I know, the newest version of LLaVA with Llama 3 8B already gets very good results without RecapDataComp1B.
Just wondering: how does the dataset contribute to the performance without changing …
-
Hello, I loaded the pre-trained llava-llama3 SFT weights and fine-tuned with LoRA, but I get an error when merging the weights:
**scripts:**
Training:
```
deepspeed --master_port=$((RANDOM + 10000)) --inclu…
-
![微信截图_20240713161048](https://github.com/user-attachments/assets/179a13fc-1dce-45d5-b803-69151cab8e56)
The converted files differ from what the documentation describes.
-
Hoping llama3.1 will be usable on Ollama soon.
-
Does llava-llama3 support a custom vision encoder?
For example, replacing CLIP with SigLIP?
How would this be implemented? Which parts of the code need to be modified?
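One likely starting point, sketched here as an assumption rather than xtuner's actual API: dispatch on the checkpoint name to pick the matching `transformers` vision class, and keep in mind that the multimodal projector's input dimension must match the new encoder's hidden size (CLIP ViT-L/14 outputs 1024-dim features, while SigLIP-so400m outputs 1152-dim), so the projector also needs to be re-initialized or re-trained. The helper below is hypothetical:

```python
# Hypothetical dispatch helper: choose the transformers vision-encoder class
# by checkpoint name. The class names (CLIPVisionModel, SiglipVisionModel)
# are real transformers classes; this helper itself is not part of xtuner.
def vision_tower_class_name(checkpoint: str) -> str:
    """Return the transformers class name matching a vision checkpoint."""
    if "siglip" in checkpoint.lower():
        return "SiglipVisionModel"
    return "CLIPVisionModel"

# The hard-coded CLIP load in the training/inference code would then be
# replaced by a load through the selected class, and the projector's
# in_features changed to the new encoder's hidden size.
print(vision_tower_class_name("google/siglip-so400m-patch14-384"))
print(vision_tower_class_name("openai/clip-vit-large-patch14-336"))
```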
-
I tried the demo code and got an error:
```
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from ll…
-
Hi! Thank you again for this repo. The fine-tuning with llama3 works. However, when I try to merge with the obtained LoRA weights, using the `merge_lora_weights.py` script, and I compare the weights b…
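For reference, a minimal dependency-free illustration of what a LoRA merge computes in general (this is the standard formula W' = W + (alpha/r)·B·A, not the repo's `merge_lora_weights.py`): the merged weights should differ from the base weights exactly where B·A is non-zero, which is why a before/after comparison is expected to show changes in the adapted layers.

```python
# Illustration of the generic LoRA merge: W' = W + (alpha / r) * (B @ A),
# using plain lists of lists so it runs without torch. Variable names follow
# the usual LoRA paper notation; this is not xtuner's implementation.
def merge_lora(W, A, B, alpha, r):
    """Merge a rank-r LoRA update (B @ A, scaled by alpha/r) into W."""
    scale = alpha / r
    rows, cols = len(W), len(W[0])
    out = [row[:] for row in W]  # copy so the base weights stay intact
    for i in range(rows):
        for j in range(cols):
            delta = sum(B[i][k] * A[k][j] for k in range(r))
            out[i][j] += scale * delta
    return out

# Tiny example: 2x2 base weight, rank-1 update.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]          # r x cols
B = [[1.0], [0.0]]        # rows x r
merged = merge_lora(W, A, B, alpha=2.0, r=1)
print(merged)  # first row shifted by 2 * (B @ A), second row unchanged
```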
-
- cmd:
```
xtuner chat LLM-Research/Meta-Llama-3-8B-Instruct \
  --visual-encoder ./clip-vit-large-patch14-336 \
  --llava ./LLM-Research/llava-llama-3-8b \
  --prompt-template llama3_chat \
  --ima…
```
-
Hello, I am trying to find the training code, but it seems there is only inference code.
Can you please point me to the training code?
-
### What happened?
We can't use vision with `ollama_chat`, but it works with `ollama`.
config.yaml
```yaml
- model_name: 'llava:7b'
litellm_params:
model: 'ollama_chat/llava:7b…