liucongg / ChatGLM-Finetuning

Fine-tuning of ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B on downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.

RuntimeError: Default process group has not been initialized, please make sure to call init_process_group. #142

Open cqray1990 opened 3 months ago

cqray1990 commented 3 months ago
if args.local_rank == -1:
    device = torch.device("cuda")
else:
    torch.cuda.set_device(args.local_rank)
    device = torch.device("cuda", args.local_rank)
    deepspeed.init_distributed()
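# NOTE: when train.py is run directly, local_rank stays -1, the branch above never
# initializes a process group, and the next line raises the reported RuntimeError.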
args.global_rank = torch.distributed.get_rank()

Is there a problem with this code? Running train.py on its own (without the launch command) throws this error, so the code can't be debugged as a standalone process; it only works in distributed mode.
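A minimal sketch of one way to make the snippet debuggable as a single process: guard the `get_rank()` call with `torch.distributed.is_initialized()` and fall back to rank 0. This is only an illustration assuming the same `args` object with a `local_rank` attribute, not the repository's actual fix.

```python
import torch
import deepspeed


def setup_device(args):
    """Pick the device and derive the global rank, with a single-process fallback."""
    if args.local_rank == -1:
        device = torch.device("cuda")
    else:
        torch.cuda.set_device(args.local_rank)
        device = torch.device("cuda", args.local_rank)
        deepspeed.init_distributed()

    # get_rank() raises "Default process group has not been initialized" unless
    # init_process_group()/init_distributed() has already run, so only call it
    # when a process group exists and otherwise treat the process as rank 0.
    if torch.distributed.is_initialized():
        args.global_rank = torch.distributed.get_rank()
    else:
        args.global_rank = 0
    return device
```

Alternatively, launching the script through the DeepSpeed launcher (e.g. `deepspeed train.py ...`), which injects `--local_rank`, should take the distributed branch and initialize the process group before `get_rank()` is reached.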
ZzYAmbition commented 2 months ago

I hit the same error even when running with the launch command. Has this been resolved?