-
I have been fine-tuning the llava-llama3-8b-v1_1 model on my own dataset using the llava_llama3_8b_instruct_full_clip_vit_large_p14_336_lora_e1_gpu8_finetune_copy.py script. While the training phase p…
J0eky updated 3 months ago
-
![image](https://github.com/user-attachments/assets/934b8b35-c514-4b62-ae83-3eb83e3b13ff)
-
I followed your installation protocol:
```
git clone https://github.com/mbzuai-oryx/LLaVA-pp.git
cd LLaVA-pp
git submodule update --init --recursive
# Copy necessary files
cp LLaMA-3-V/train…
```
-
Hi, and first of all, thank you for the superb plugin. It's just awesome!
Could you please provide a little more documentation on the local LLM configuration?
Specifically, I mean what possible values there…
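In case it helps while the docs are pending: many tools with a "local LLM" option expect an OpenAI-compatible endpoint. The sketch below is a hypothetical configuration, not this plugin's actual schema (the key names are assumptions); the base URL shown is Ollama's default OpenAI-compatible endpoint.

```python
# Hypothetical option names -- check the plugin's real settings schema.
local_llm_config = {
    "provider": "openai-compatible",
    "base_url": "http://localhost:11434/v1",  # Ollama's default OpenAI-compatible endpoint
    "model": "llama3",                        # any model tag you have pulled locally
    "api_key": "not-needed",                  # local servers usually ignore this
}
print(local_llm_config["base_url"])
```

The general pattern (base URL + model name + dummy API key) carries over to most local servers such as Ollama or llama.cpp's server mode, even when the exact key names differ.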
-
- description
I followed the tutorial for Llama3 fine-tuning: [https://github.com/SmartFlowAI/Llama3-Tutorial/blob/main/docs/llava.md](https://github.com/SmartFlowAI/Llama3-Tutorial/blob/main/docs/llava.md)
I …
-
Hi! Thank you for the contribution with the dataset! Really cool stuff! I was wondering: are you planning to release the code you used to create the dataset?
-
I have never been able to correctly apply AWQ quantization to llava-llama3 in the official LLaVA format.
Can anyone help me?
-
### 📚 The doc issue
Thank you for your work. Following the official documentation at https://github.com/InternLM/lmdeploy/blob/main/docs/zh_cn/multi_modal/vl_pipeline.md, I was able to get it running. The code is as follows:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
from lm…
```
-
### feature
Could you please support Llama3 in Llava ?
-
Using GPT-4o, everything runs OK.
But when I switched to the local model, I got an error message:
EXCEPTION: 'function' object has no attribute 'name'
![image](https://github.com/onuratakan/gpt-compute…
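For anyone debugging this class of error: in Python, a plain function object exposes its name via `__name__`, not `.name`, so code that evaluates `some_function.name` raises exactly this `AttributeError`. A minimal reproduction (the function name below is made up for illustration):

```python
def local_model():
    """Stand-in for whatever callable the tool received."""
    pass

# Functions expose their name via __name__, not .name:
print(local_model.__name__)  # prints "local_model"

# Accessing .name raises the error from the report:
try:
    local_model.name
except AttributeError as exc:
    print(exc)  # prints "'function' object has no attribute 'name'"
```

This usually means the code expected a model or config object that has a `name` attribute but was handed a bare function instead, so checking what the local-model setting actually returns is a good first step.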