yuanzhoulvpi2017 / zero_nlp

Chinese NLP solutions (large models, data, models, training, inference)
MIT License

Multi-GPU run error (Chatglm6b_ModelParallel_ptuning, training) #110

Closed · hunwenpinghao closed this issue 1 year ago

hunwenpinghao commented 1 year ago

Multi-GPU run fails with the error: Expected all tensors to be on the same device, but found at least two devices, cuda:4 and cuda:0! The device_map I used:

device_map_dict = {'transformer.word_embeddings': 0,
                   'transformer.layers.0': 0,
                   'transformer.layers.1': 0,
                   'transformer.layers.2': 0,
                   'transformer.layers.3': 0,
                   'transformer.layers.4': 1,
                   'transformer.layers.5': 1,
                   'transformer.layers.6': 1,
                   'transformer.layers.7': 1,
                   'transformer.layers.8': 1,
                   'transformer.layers.9': 2,
                   'transformer.layers.10': 2,
                   'transformer.layers.11': 2,
                   'transformer.layers.12': 2,
                   'transformer.layers.13': 2,
                   'transformer.layers.14': 3,
                   'transformer.layers.15': 3,
                   'transformer.layers.16': 3,
                   'transformer.layers.17': 3,
                   'transformer.layers.18': 3,
                   'transformer.layers.19': 4,
                   'transformer.layers.20': 4,
                   'transformer.layers.21': 4,
                   'transformer.layers.22': 4,
                   'transformer.layers.23': 4,
                   'transformer.layers.24': 4,
                   'transformer.layers.25': 5,
                   'transformer.layers.26': 5,
                   'transformer.layers.27': 5,
                   'transformer.final_layernorm': 5,
                   'transformer.prefix_encoder': 5,
                   'lm_head': 5,
                   }
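
For reference, this RuntimeError can be reproduced in isolation whenever a module's weight and its input land on different devices; a minimal sketch, assuming a machine with at least two visible GPUs:

    import torch

    # the embedding weight lives on cuda:0, the input ids on cuda:1
    emb = torch.nn.Embedding(100, 16).to('cuda:0')
    ids = torch.tensor([1, 2, 3], device='cuda:1')
    out = emb(ids)  # RuntimeError: Expected all tensors to be on the same device ...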

(screenshots of the error traceback)

yuanzhoulvpi2017 commented 1 year ago

Please be specific: which project (folder name) are you using, and does the error occur during training or during inference?

hunwenpinghao commented 1 year ago

Please be specific: which project (folder name) are you using, and does the error occur during training or during inference?

Sorry for not being clear. I'm using the Chatglm6b_ModelParallel_ptuning folder, and the error occurs during training. Could you take a look at what the problem might be?

yuanzhoulvpi2017 commented 1 year ago

Your screenshot shows too little of the error; I can't pinpoint where the problem actually is.

hunwenpinghao commented 1 year ago

Your screenshot shows too little of the error; I can't pinpoint where the problem actually is.

OK, thanks. It seems I've found a solution; with the following change it runs. First install accelerate (pip install accelerate), then:

from accelerate import dispatch_model
from utils import auto_configure_device_map

device_map = auto_configure_device_map(6)
model = dispatch_model(model, device_map=device_map)

where auto_configure_device_map (in utils.py) is defined as:

from typing import Dict

def auto_configure_device_map(num_gpus: int) -> Dict[str, int]:
    # transformer.word_embeddings counts as 1 slot
    # transformer.final_layernorm and lm_head together count as 1 slot
    # transformer.layers counts as 28 slots
    # 30 slots in total are distributed across num_gpus GPUs
    num_trans_layers = 28
    per_gpu_layers = 30 / num_gpus

    # bugfix: on Linux, torch.embedding receives a weight and an input that are not on the same device, raising a RuntimeError
    # on Windows, model.device is set to transformer.word_embeddings.device
    # on Linux, model.device is set to lm_head.device
    # when chat or stream_chat is called, input_ids is moved to model.device
    # if transformer.word_embeddings.device differs from model.device, that triggers the RuntimeError
    # so transformer.word_embeddings, transformer.final_layernorm and lm_head are all placed on the first GPU
    device_map = {'transformer.word_embeddings': 0,
                  'transformer.final_layernorm': 0, 
                  'transformer.prefix_encoder': 0,
                  'lm_head': 0}

    used = 2        # GPU 0 already holds the two slots counted above
    gpu_target = 0
    for i in range(num_trans_layers):
        if used >= per_gpu_layers:
            gpu_target += 1
            used = 0
        assert gpu_target < num_gpus
        device_map[f'transformer.layers.{i}'] = gpu_target
        used += 1

    return device_map
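
As a quick sanity check (my addition, not part of the referenced utils.py), grouping the returned map by GPU makes the resulting split visible; with num_gpus=6, GPU 0 holds word_embeddings, final_layernorm, prefix_encoder and lm_head plus layers 0-2, and each remaining card gets five consecutive layers:

    from collections import defaultdict

    per_gpu = defaultdict(list)
    for module, gpu in auto_configure_device_map(6).items():
        per_gpu[gpu].append(module)
    for gpu, modules in sorted(per_gpu.items()):
        print(f'cuda:{gpu}: {modules}')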

Reference: https://github.com/THUDM/ChatGLM-6B/blob/163f94e160f08751545e3722730f1832d73b92d1/utils.py#L8