Open LaBaZh opened 4 months ago
I'm trying to start from https://github.com/haotian-liu/LLaVA/blob/main/llava/train/train.py and figure out the template/conversation/multi-image processing.
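For anyone tracing the template handling in train.py, the Vicuna-style conversation used by LLaVA roughly follows the shape below. This is a simplified sketch of my own (the actual logic lives in llava/conversation.py and handles more roles and separator styles), so treat the exact strings as illustrative:

```python
# Simplified sketch of a Vicuna-v1-style prompt builder, as used by LLaVA.
# The real implementation is in llava/conversation.py; this only shows the shape.
SYSTEM = ("A chat between a curious human and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the human's questions.")

def build_prompt(turns):
    """Render (user, assistant) turns into a single prompt string.
    Images appear as the literal '<image>' placeholder token, which the
    image processor later expands into visual tokens."""
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        parts.append(f" USER: {user_msg} ASSISTANT:")
        if assistant_msg is not None:
            # Completed assistant turns are closed with the EOS separator.
            parts[-1] += f" {assistant_msg}</s>"
    return "".join(parts)

# A single-turn, single-image prompt awaiting the model's answer:
prompt = build_prompt([("<image>\nWhat is shown here?", None)])
```

The key point for multi-image processing is that each `<image>` placeholder in a user turn must line up with one image tensor passed to the model.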
Sounds good! Maybe you can refer to the open-llava-next repo for training code.
Check this issue: https://github.com/LLaVA-VL/LLaVA-NeXT/issues/79#issuecomment-2212369132
I believe the fine-tuning code has already been integrated into transformers.
Hi, I am trying to use the LLaVA 1.5 training code to finetune LLaVA-Next. However, I encounter an issue where the training process gets blocked when using multiple GPUs, and there are no error messages to help diagnose the problem.
Here are some details: I can train successfully with multiple GPUs on LLaVA 1.5. I can train successfully with a single GPU on LLaVA-Next and obtain the expected results.
Any tips for resolving this issue with multi-GPU training on LLaVA-Next?
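Not a fix, but when a distributed run hangs with no error output, it usually helps to make the collectives fail loudly first. A small sketch using standard NCCL/PyTorch environment variables (these must be set before torch.distributed initializes; exact variable names can differ across PyTorch versions):

```python
import os

# Make NCCL / torch.distributed hangs diagnosable instead of silent.
# Set these in the launching environment before any distributed init.
debug_env = {
    "NCCL_DEBUG": "INFO",                 # log NCCL init and collective calls
    "TORCH_DISTRIBUTED_DEBUG": "DETAIL",  # report mismatched collectives per rank
    "NCCL_ASYNC_ERROR_HANDLING": "1",     # surface async NCCL errors as exceptions
}
os.environ.update(debug_env)
```

With `TORCH_DISTRIBUTED_DEBUG=DETAIL`, PyTorch will typically report which rank issued a mismatched collective, which is exactly the symptom when ranks diverge mid-training.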
I've identified the issue. It stems from changing the batch size during training, which was a problem on our end. In any case, it works after adapting the training code from LLaVA 1.5, though a few revisions are needed.
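For context on why a changing batch size can block multi-GPU training: if ranks end up executing different numbers of batches (and therefore different numbers of collective calls), the ranks with extra batches wait forever on an all-reduce that the others never issue. A minimal illustration (my own sketch, names are illustrative) of keeping batch counts identical by dropping ragged tails, which mirrors `DataLoader(drop_last=True)`:

```python
def fixed_size_batches(samples, batch_size):
    """Yield only full batches so every rank sees an identical number
    of batches; the incomplete tail is dropped, mirroring the effect
    of DataLoader(drop_last=True) in distributed training."""
    for start in range(0, len(samples) - batch_size + 1, batch_size):
        yield samples[start:start + batch_size]

# With 10 samples and batch_size=4, the 2-sample tail is dropped:
batches = list(fixed_size_batches(list(range(10)), 4))
# -> [[0, 1, 2, 3], [4, 5, 6, 7]]
```

If the batch size varies mid-run, different ranks can hit that ragged tail at different steps, which is one common way a silent deadlock appears.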
@JinhuiYE Hi there, I also want to train/finetune llava-next on my own dataset. Could you share the training code or some useful links?
I reproduced a version of the training code for llava-1.6 that enables video-data training, based on the open-llava-next repo; feel free to check it out.
@LaBaZh I would love to see that as well. Where can I see that? Thanks in advance.
Check my repo, open-longva.
As the title says: are there any specific plans for releasing the training code?