NVlabs / NVAE

The Official PyTorch Implementation of "NVAE: A Deep Hierarchical Variational Autoencoder" (NeurIPS 2020 spotlight paper)
https://arxiv.org/abs/2007.03898

Fix issue with pytorch #21

Open ImanHosseini opened 3 years ago

ImanHosseini commented 3 years ago

The problem is described in this PyTorch issue: https://github.com/pytorch/pytorch/issues/46820 (it means NVAE doesn't work on at least pytorch==1.7 and beyond). I have tested the change with pytorch==1.7, so this one change is all that is needed to add support for 1.7, which brings many improvements over 1.6 (such as Windows support for distributed training).

I came across this when running the code with pytorch==1.7 and getting the error message below; this change fixes the problem:

```
/home/iman/projs/NVAE/distributions.py:31: UserWarning: Output 0 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /pytorch/torch/csrc/autograd/variable.cpp:491.)
  self.mu = soft_clamp5(mu)
/home/iman/projs/NVAE/distributions.py:32: UserWarning: Output 1 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /pytorch/torch/csrc/autograd/variable.cpp:491.)
  log_sigma = soft_clamp5(log_sigma)
Traceback (most recent call last):
  File "train.py", line 415, in <module>
    init_processes(0, size, main, args)
  File "train.py", line 281, in init_processes
    fn(args)
  File "train.py", line 92, in main
    train_nelbo, global_step = train(train_queue, model, cnn_optimizer, grad_scalar, global_step, warmup_iters, writer, logging)
  File "train.py", line 164, in train
    logits, log_q, log_p, kl_all, kl_diag = model(x)
  File "/home/iman/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/iman/projs/NVAE/model.py", line 358, in forward
    dist = Normal(mu_q, log_sig_q)    # for the first approx. posterior
  File "/home/iman/projs/NVAE/distributions.py", line 32, in __init__
    log_sigma = soft_clamp5(log_sigma)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/home/iman/projs/NVAE/distributions.py", line 19, in soft_clamp5
    xx = 5.0*torch.tanh( x / 5.0)

    # return  5.0*torch.tanh( x / 5.0)
    return x.div_(5.).tanh_().mul(5.)    #  5. * torch.tanh(x / 5.) <--> soft differentiable clamp between [-5, 5]
           ~~~~~~ <--- HERE
RuntimeError: diff_view_meta->output_nr == 0 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.
```
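The failing `return x.div_(5.).tanh_().mul(5.)` mutates its input in place, which PyTorch ≥ 1.7 forbids when `x` is a view produced by a multi-view op such as `torch.chunk`. A minimal sketch of the out-of-place alternative (matching the commented-out line in `distributions.py`; the example tensors below are illustrative, not from the repo):

```python
import torch

@torch.jit.script
def soft_clamp5(x: torch.Tensor) -> torch.Tensor:
    # Out-of-place: 5 * tanh(x / 5) softly clamps values into [-5, 5]
    # without mutating x, so it is safe on views returned by torch.chunk.
    return 5. * torch.tanh(x / 5.)

# torch.chunk returns views of one tensor (SplitBackward); the in-place
# chain div_().tanh_() on these is what triggered the warning/assert.
params = torch.randn(2, 4, requires_grad=True)
mu, log_sigma = torch.chunk(params, 2, dim=0)
log_sigma = soft_clamp5(log_sigma)  # no in-place op on the view
```

The out-of-place form allocates one extra temporary per call, but it is numerically identical to the in-place chain and keeps the autograd graph valid under the stricter view semantics.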