JiamingLv opened this issue 5 months ago
Downgrading transformers can solve this.
> downgrade transformers can solve this

@acdart Hi, my transformers version is currently 4.41.1. Which version should I downgrade to?
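The thread never names the version to downgrade to, so that remains open. As a side note, comparing dotted version strings lexically is error-prone ("4.9" > "4.41" as strings); a tuple comparison is a safer sketch (4.41.1 below is the reporter's version from this thread, and 4.41.0 as the breaking release is only a guess, not something the thread confirms):

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like '4.41.1' into a comparable int tuple."""
    return tuple(int(part) for part in v.split("."))

installed = "4.41.1"  # the reporter's transformers version
# 4.41.0 is an assumed threshold; the actual breaking release is not stated.
print(version_tuple(installed) < version_tuple("4.41.0"))  # → False, still on 4.41.x
```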
How were you even able to run that command? It tells me it doesn't recognize `NPROC_PER_NODE=4`, and if I run it without that part (i.e. just `xtuner train llava_phi3_mini_4k_instruct_clip_vit_large_p14_336_e1_gpu8_pretrain --deepspeed deepspeed_zero2 --seed 1024`), it says it doesn't know what `llava_phi3_mini_4k_instruct_clip_vit_large_p14_336_e1_gpu8_pretrain` is.
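The `NPROC_PER_NODE=4` prefix is POSIX-shell syntax: an inline environment assignment that applies only to the command that follows. Shells such as Windows cmd or PowerShell reject this form, which would explain a "doesn't recognize" error (an assumption; the thread doesn't say which shell is in use). A minimal demonstration in a POSIX shell:

```shell
# Inline assignment: NPROC_PER_NODE is set only for the child command.
NPROC_PER_NODE=4 sh -c 'echo "child sees: $NPROC_PER_NODE"'
# After the prefixed command, the variable is not set in this shell.
echo "after: ${NPROC_PER_NODE:-unset}"
```

The second error (the unrecognized config name) is a separate issue and is not resolved in this thread.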
I strictly followed the documentation for `phi3_mini_4k_instruct_clip_vit_large_p14_336` and ran the command: `NPROC_PER_NODE=4 xtuner train llava_phi3_mini_4k_instruct_clip_vit_large_p14_336_e1_gpu8_pretrain --deepspeed deepspeed_zero2 --seed 1024`
Conda environment: python==3.10, transformers==4.41.1, torch==2.3.0, CUDA 12.1, 4x RTX 3090