vacancy / Synchronized-BatchNorm-PyTorch

Synchronized Batch Normalization implementation in PyTorch.
MIT License

Weird thing: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu #37

Closed: kehuantiantang closed this issue 4 years ago

kehuantiantang commented 4 years ago

I use PyTorch 1.2.0. I first initialize the model, then call model.cuda() to move it to the GPU, and then call:

model = nn.DataParallel(model)
model = convert_model(model)

When I train the model, it gives me this error. Could you give me some information on how to avoid it? Thank you.

vacancy commented 4 years ago

I think you should call model.cuda() after your model = convert_model(model).
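
The suggested order can be sketched as follows. This is a minimal example, assuming the sync_batchnorm package from this repository is importable and at least one CUDA device is available; the small Sequential model is just a stand-in for the asker's network.

```python
import torch
import torch.nn as nn
from sync_batchnorm import convert_model  # from this repository

# A stand-in model containing ordinary BatchNorm layers.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.BatchNorm2d(16),
    nn.ReLU(),
)

# 1. Wrap with DataParallel first.
model = nn.DataParallel(model)

# 2. Convert BatchNorm layers to their synchronized counterparts.
model = convert_model(model)

# 3. Only now move parameters and buffers to the GPU. Calling .cuda()
#    before convert_model() can leave the newly created synchronized
#    layers on the CPU, producing the "found one of them on device: cpu"
#    error reported above.
if torch.cuda.is_available():
    model = model.cuda()
```

The key point is step ordering: convert_model() creates replacement modules, so any .cuda() call made before the conversion does not apply to them.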