openai / improved-diffusion

Release for Improved Denoising Diffusion Probabilistic Models
MIT License
3.27k stars · 487 forks

RuntimeError: a leaf Variable that requires grad is being used in an in-place operation #23

Open MeimShang opened 2 years ago

MeimShang commented 2 years ago

I get an error when I run image_train.py. My torch version is 1.7.1+cu110.

Traceback (most recent call last):
  File "scripts/image_train.py", line 83, in <module>
    main()
  File "scripts/image_train.py", line 41, in main
    TrainLoop(
  File "/mnt/e/w2l/code/diffusion/ddim/improved-diffusion-main/improved_diffusion/train_util.py", line 78, in __init__
    self._load_and_sync_parameters()
  File "/mnt/e/w2l/code/diffusion/ddim/improved-diffusion-main/improved_diffusion/train_util.py", line 127, in _load_and_sync_parameters
    dist_util.sync_params(self.model.parameters())
  File "/mnt/e/w2l/code/diffusion/ddim/improved-diffusion-main/improved_diffusion/dist_util.py", line 72, in sync_params
    dist.broadcast(p, 0)
  File "/home/mayme/anaconda3/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 868, in broadcast
    work.wait()
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
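The underlying PyTorch behavior can be reproduced without the repo at all (this is a standalone sketch, not code from improved-diffusion): autograd forbids in-place writes to leaf tensors that require grad, and `dist.broadcast` performs exactly such a write on the model's parameters.

```python
# Minimal reproduction of the autograd rule behind the traceback above
# (illustration only; not code from improved-diffusion).
import torch

p = torch.zeros(3, requires_grad=True)  # leaf tensor, like a model parameter
try:
    p.copy_(torch.ones(3))  # in-place write, analogous to dist.broadcast(p, 0)
    raised = False
    message = ""
except RuntimeError as err:
    raised = True
    message = str(err)
```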

jinzeren commented 2 years ago

Same problem. Have you found the solution?

createvalues commented 7 months ago

I have the same problem. If you work on Windows, you can solve it by modifying the sync_params function (see q1). My guess is that the problem is that Windows does not support distributed training.