Open amwork2020 opened 1 year ago
Running chatglm2 on a 3090 (24 GB) without modifying anything, I get the following error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward)
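This error means the loss function's input and target tensors ended up on different GPUs. A minimal, hypothetical sketch of the common fix (not the repo's actual code): move the labels to the logits' device before computing the loss, or pin the process to a single GPU with `CUDA_VISIBLE_DEVICES=0`.

```python
import torch
import torch.nn.functional as F

# Pick one device; on a multi-GPU box this keeps everything on cuda:0.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

logits = torch.randn(4, 10, device=device)  # hypothetical model output
labels = torch.randint(0, 10, (4,))         # labels often arrive on CPU from a DataLoader

# The fix: place the target on the logits' device before cross_entropy/nll_loss.
labels = labels.to(logits.device)
loss = F.cross_entropy(logits, labels)
print(loss.item() >= 0)
```

Alternatively, launching with `CUDA_VISIBLE_DEVICES=0 python train.py` hides the second GPU entirely, so nothing can land on cuda:1.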
Please be sure to read the 【注意】⚠️ note under the 07-17 entry in my readme.md.
If you have already fixed the code and the problem persists, check your transformers version; I recommend installing the latest release: pip install transformers
@yuanzhoulvpi2017 That fixed it, thanks!
Thanks as well, the error is resolved for me too.
I changed the code and also upgraded transformers, so why does this problem still occur?