Closed · AItechnology closed this issue 1 year ago
$ export CUDA_VISIBLE_DEVICES=0
will fix that. Feel free to implement a data parallel pipeline if you'd like to use multiple GPUs to train the model.
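For reference, a data parallel pipeline along the lines suggested above could look roughly like this. This is a minimal sketch using PyTorch's `DistributedDataParallel`, not the repo's actual finetuning code: the tiny `Linear` model and the synthetic batch are placeholders for your real model and data, and it falls back to a single-process CPU run (gloo backend) when launched without `torchrun`.

```python
# Hedged sketch of a DDP finetuning step. Launch on N GPUs with:
#   torchrun --nproc_per_node=N this_script.py
# Without torchrun it runs as a single CPU process for illustration.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK/WORLD_SIZE/LOCAL_RANK; default to one CPU process.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    rank = int(os.environ.get("RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    use_cuda = torch.cuda.is_available()
    backend = "nccl" if use_cuda else "gloo"
    dist.init_process_group(backend, rank=rank, world_size=world_size)

    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    device = torch.device(f"cuda:{local_rank}" if use_cuda else "cpu")
    if use_cuda:
        torch.cuda.set_device(device)

    # Placeholder model standing in for the real one being finetuned.
    model = torch.nn.Linear(16, 2).to(device)
    # Move the model to this rank's device *before* wrapping in DDP;
    # otherwise DDP raises the cuda:0 / cuda:1 device-mismatch error.
    ddp_model = DDP(model, device_ids=[local_rank] if use_cuda else None)

    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    x = torch.randn(8, 16, device=device)          # synthetic batch
    y = torch.randint(0, 2, (8,), device=device)   # synthetic labels
    loss = torch.nn.functional.cross_entropy(ddp_model(x), y)
    opt.zero_grad()
    loss.backward()   # gradients are all-reduced across ranks here
    opt.step()

    dist.destroy_process_group()
    return loss.item()

if __name__ == "__main__":
    print(f"step loss: {main():.4f}")
```

Each process holds a full model replica and sees a different shard of the data; `backward()` averages gradients across ranks, so the replicas stay in sync.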
Thanks very much. I can finetune the model on a single GPU; what I want is to use multiple GPUs to finetune a large model. I have tried a few approaches but they all failed, so could you provide multi-GPU finetuning code for the data parallel pipeline?
When I execute the finetune code, I get this error: `RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1`
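That error typically means the model's parameters are on `cuda:1` while `nn.DataParallel` expects everything on `device_ids[0]` (`cuda:0`) before wrapping. A minimal sketch of the fix, assuming a generic PyTorch model (the helper name here is hypothetical):

```python
# Sketch: ensure the model sits on the first visible GPU before
# wrapping it in nn.DataParallel, which replicates it to the rest.
import torch
from torch import nn

def wrap_data_parallel(model: nn.Module) -> nn.Module:
    """Hypothetical helper: place model on cuda:0, then parallelize."""
    if torch.cuda.device_count() > 1:
        model = model.to("cuda:0")      # must match device_ids[0]
        model = nn.DataParallel(model)  # defaults to all visible GPUs
    elif torch.cuda.is_available():
        model = model.to("cuda:0")      # single-GPU fallback
    return model                        # CPU-only: returned unchanged
```

Alternatively, restricting visibility with `export CUDA_VISIBLE_DEVICES=0` (as suggested above) sidesteps the mismatch by exposing a single GPU.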