Open zhww opened 4 months ago
--version plain
With MODEL_TYPE=llama3-8b and --version plain for pretraining, I still get the same errors!
On our device, ZeRO-3 for pre-training works. Please share more information, or try our configured Docker image.
Thank you. I tested on a 4090 before; after switching to an A100, ZeRO-3 for pretraining works.
By the way, will you support arbitrary resolutions by slicing the image into tiles? S2 only provides multi-scale features.
Pretraining with ZeRO-3 gives errors, but LoRA finetuning with ZeRO-3 is OK. The error info is:

python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3375, in reduce_scatter_tensor
    work = group._reduce_scatter_base(output, input, opts)
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1333, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.18.1
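As the traceback itself suggests, the failing run can be repeated with NCCL debug logging turned on to see which CUDA call actually fails. A minimal sketch; the commented-out deepspeed launch line is an assumed placeholder, not the repo's actual command:

```shell
# Enable verbose NCCL logging before relaunching the failing pretrain run.
export NCCL_DEBUG=INFO              # print NCCL internals to stderr
export NCCL_DEBUG_SUBSYS=INIT,COLL  # focus on init and collective ops
echo "NCCL_DEBUG=$NCCL_DEBUG"
# deepspeed --num_gpus 8 pretrain.py --deepspeed zero3.json  # placeholder launch
```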
The pretrain settings: MODEL_TYPE=llama3-8b, --version llama
transformers==4.40.0 deepspeed==0.14.3 pytorch==2.1.2+cu121
Could you check whether the pretrain code works with ZeRO-3?
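For reference, a minimal ZeRO-3 DeepSpeed config can be written like the sketch below. The values are illustrative assumptions, not the repo's shipped config; the key names follow DeepSpeed's documented JSON schema:

```python
import json

# Minimal ZeRO-3 config sketch (illustrative values, not the project's own file).
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "bf16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 3,                    # partition params, grads, and optimizer state
        "overlap_comm": True,          # overlap reduce-scatter with the backward pass
        "contiguous_gradients": True,  # reduce memory fragmentation
        "stage3_gather_16bit_weights_on_model_save": True,
    },
}

with open("zero3.json", "w") as f:
    json.dump(ds_config, f, indent=2)

print(json.load(open("zero3.json"))["zero_optimization"]["stage"])  # → 3
```

Passing such a file via `--deepspeed zero3.json` makes it easy to compare the exact ZeRO-3 settings between the working A100 run and the failing 4090 run.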