tamarott / SinGAN

Official pytorch implementation of the paper: "SinGAN: Learning a Generative Model from a Single Natural Image"
https://tamarott.github.io/SinGAN.htm

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 32, 3, 3]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). #137

Closed · brunzibeer closed this issue 3 years ago

brunzibeer commented 3 years ago

As per the title, here's the full log of what happens when I try to run `python main_train.py --not_cuda --input_name cows.png`:

```
Random Seed: 6568
GeneratorConcatSkip2CleanAdd(
  (head): ConvBlock(
    (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
    (norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (LeakyRelu): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (body): Sequential(
    (block1): ConvBlock(
      (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
      (norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (LeakyRelu): LeakyReLU(negative_slope=0.2, inplace=True)
    )
    (block2): ConvBlock(
      (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
      (norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (LeakyRelu): LeakyReLU(negative_slope=0.2, inplace=True)
    )
    (block3): ConvBlock(
      (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
      (norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (LeakyRelu): LeakyReLU(negative_slope=0.2, inplace=True)
    )
  )
  (tail): Sequential(
    (0): Conv2d(32, 3, kernel_size=(3, 3), stride=(1, 1))
    (1): Tanh()
  )
)
WDiscriminator(
  (head): ConvBlock(
    (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
    (norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (LeakyRelu): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (body): Sequential(
    (block1): ConvBlock(
      (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
      (norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (LeakyRelu): LeakyReLU(negative_slope=0.2, inplace=True)
    )
    (block2): ConvBlock(
      (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
      (norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (LeakyRelu): LeakyReLU(negative_slope=0.2, inplace=True)
    )
    (block3): ConvBlock(
      (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
      (norm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (LeakyRelu): LeakyReLU(negative_slope=0.2, inplace=True)
    )
  )
  (tail): Conv2d(32, 1, kernel_size=(3, 3), stride=(1, 1))
)
Traceback (most recent call last):
  File "main_train.py", line 29, in <module>
    train(opt, Gs, Zs, reals, NoiseAmp)
  File "/Users/mattiabernardi/Documents/MLDL Laboratory/Papers/SinGAN/SinGAN/training.py", line 39, in train
    z_curr,in_s,G_curr = train_single_scale(D_curr,G_curr,reals,Gs,Zs,in_s,NoiseAmp,opt)
  File "/Users/mattiabernardi/Documents/MLDL Laboratory/Papers/SinGAN/SinGAN/training.py", line 178, in train_single_scale
    errG.backward(retain_graph=True)
  File "/opt/anaconda3/envs/myenv/lib/python3.6/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/opt/anaconda3/envs/myenv/lib/python3.6/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 32, 3, 3]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
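For context, this failure mode is reproducible outside SinGAN. Newer PyTorch releases (>= 1.5) enforce a stricter version check in autograd: calling `backward()` on a graph whose saved parameters were since updated in place by `optimizer.step()` raises exactly this error. The `[3, 32, 3, 3]` tensor in the message matches the shape of the generator's tail `Conv2d(32, 3)` weight. The snippet below is a minimal, hypothetical sketch of that pattern, not SinGAN's actual training loop:

```python
# Minimal sketch (hypothetical, not SinGAN's code) of the failure:
# a loss graph is replayed with retain_graph=True after optimizer.step()
# has modified a saved weight in place.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(32, 32, 3),
    nn.Conv2d(32, 3, 3),   # weight shape [3, 32, 3, 3], like the tensor in the error
)
opt = torch.optim.Adam(net.parameters(), lr=5e-4)

x = torch.randn(1, 32, 16, 16)
loss = net(x).mean()       # graph saves the current version of the weights

for _ in range(3):         # inner "G-steps"-style loop
    opt.zero_grad()
    loss.backward(retain_graph=True)  # 2nd pass fails: the saved weight is now at a newer version
    opt.step()             # in-place parameter update bumps the version counter
```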

venkycode commented 3 years ago

I encountered the same error. Fortunately, it has already been discussed in one of the closed issues: https://github.com/tamarott/SinGAN/issues/108#issuecomment-671640792
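Without restating the exact patch discussed in #108, the general way to avoid this class of error is to rebuild the loss graph on every optimizer step instead of replaying a stale one with `retain_graph=True`. A sketch of that pattern, continuing the hypothetical example above rather than SinGAN's own loop:

```python
# Recompute the forward pass inside the loop so each backward() uses a graph
# built from the parameters' current (post-step) versions.
for _ in range(3):
    opt.zero_grad()
    loss = net(x).mean()   # fresh graph every iteration
    loss.backward()        # no retain_graph needed, no stale saved tensors
    opt.step()
```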

brunzibeer commented 3 years ago

> I encountered the same error. Fortunately, it has already been discussed in one of the closed issues: #108 (comment)

Thank you! Apparently I'm blind, because I did search to see whether the same issue had already been reported.