wenyisir opened this issue 2 months ago
It will check the folder `--output_dir /data/wyxu/LLaVA/checkpoints/llava-vicuna-7b-v1.3-finetune-on-mic_sampled-lora` for the latest `checkpoint-xxxx` and resume training from it.
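For context, this is roughly the resume logic used by HF-Trainer-based scripts such as LLaVA's train.py (a paraphrased sketch, not the repo's exact code; the trainer construction is omitted and assumed to exist):

```python
import pathlib

output_dir = "/data/wyxu/LLaVA/checkpoints/llava-vicuna-7b-v1.3-finetune-on-mic_sampled-lora"

# `trainer` is an already-constructed transformers.Trainer (setup omitted).
# If any checkpoint-XXXX folder exists under output_dir, the Trainer locates
# the latest one and resumes from it; otherwise training starts from scratch.
if list(pathlib.Path(output_dir).glob("checkpoint-*")):
    trainer.train(resume_from_checkpoint=True)
else:
    trainer.train()
```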
As for the state_dict mismatch:

```
pip install transformers==4.39.3
pip install accelerate==0.27.2
```
It is mentioned in some issue, but I forgot which one; it might be this one: #1200
I fixed this bug by modifying `site-packages/deepspeed/runtime/engine.py` at line 2675, setting `load_module_strict=False`.
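If you drive DeepSpeed yourself rather than through the HF Trainer, you may not need to patch site-packages at all, since `load_module_strict` is a public argument of DeepSpeed's `load_checkpoint`. A hedged sketch (the `engine` object and checkpoint path are placeholders, not LLaVA code):

```python
# Passing load_module_strict=False relaxes the strict state_dict comparison
# that otherwise fails when the saved module is missing LoRA/projector keys.
# `engine` is an initialized deepspeed.DeepSpeedEngine (setup omitted).
load_path, client_state = engine.load_checkpoint(
    "/data/wyxu/LLaVA/checkpoints/llava-vicuna-7b-v1.3-finetune-on-mic_sampled-lora",
    load_module_strict=False,
)
```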
Great, so there is no need to change the transformers version; that way you can also avoid potential troubles like those in #1218 when running inference.
Describe the issue
Issue: When resuming from a checkpoint with LoRA fine-tuning, an error occurs, while resuming without LoRA fine-tuning works fine. Can you explain why? How should I modify the code to save more parameters? (A sketch of one possible direction follows the screenshots below.)

![image](https://github.com/haotian-liu/LLaVA/assets/65152684/73baef59-feae-4b3e-8b64-ee49ba2395a2)
Command:
Log:
Screenshots:

![image](https://github.com/haotian-liu/LLaVA/assets/65152684/d7924de6-b9be-46f9-b370-ba362ca78704)
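One hedged sketch of the "save more parameters" direction (illustrative, not LLaVA's exact helper; if I recall correctly, LLaVA's train.py does something similar in `get_peft_state_non_lora_maybe_zero_3`): alongside the PEFT adapter, also dump every trainable parameter whose name lacks `lora_`, e.g. the `mm_projector`, so a resumed run can restore the full trainable state.

```python
import torch

def save_non_lora_trainables(model, output_dir):
    # Collect trainable parameters that are NOT part of the LoRA adapter,
    # e.g. projector weights that LoRA runs also unfreeze.
    non_lora = {
        name: param.detach().cpu()
        for name, param in model.named_parameters()
        if param.requires_grad and "lora_" not in name
    }
    torch.save(non_lora, f"{output_dir}/non_lora_trainables.bin")

# Usage (hypothetical): after model.save_pretrained(output_dir) writes the
# LoRA adapter, persist the remaining trainables next to it.
# save_non_lora_trainables(model, output_dir)
```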