I am training a task LoRA on "liuhaotian/llava-v1.5-13b", following the same script as in the LLaVA repo: https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/finetune_task_lora.sh
The above runs fine in the original LLaVA repo (https://github.com/haotian-liu/LLaVA/tree/main). When I run it in this LLaVA-NeXT repo (with a few lines of train.py slightly modified to include the llava model), training runs, but it keeps showing 'loss': 0.0 and the same 'grad_norm'. Any ideas?
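
One thing I'm planning to check is whether the labels are being fully masked after the template/preprocessing differences between the two repos, since a batch with no supervised tokens would explain a flat loss. Here's a minimal sketch of that check (my own debugging snippet, not code from either repo; it only assumes LLaVA's convention of masking non-target tokens with IGNORE_INDEX = -100):

```python
import torch

IGNORE_INDEX = -100  # LLaVA's label-masking value for tokens excluded from the loss


def check_labels(batch: dict) -> None:
    """Warn if a batch contributes no supervised tokens to the loss."""
    labels = batch["labels"]
    n_valid = (labels != IGNORE_INDEX).sum().item()
    if n_valid == 0:
        print("WARNING: all labels masked; this batch contributes zero loss.")
    else:
        print(f"{n_valid} / {labels.numel()} tokens contribute to the loss.")
```

Calling this on a few batches from the dataloader (before they reach the model) should show whether the mismatch is in the data preprocessing rather than the LoRA setup.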