Zhendong-Wang / Diffusion-GAN

Official PyTorch implementation for paper: Diffusion-GAN: Training GANs with Diffusion
MIT License

'tuple' object is not callable #25

Open malekiamir opened 1 year ago

malekiamir commented 1 year ago

Hi, first I want to thank you for publishing this code. I tried to train a model as described in this repository, without any changes to the code, but I get the following error.

```
Traceback (most recent call last):
  File "/content/Diffusion-GAN/diffusion-insgen/train.py", line 603, in <module>
    main() # pylint: disable=no-value-for-parameter
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/content/Diffusion-GAN/diffusion-insgen/train.py", line 596, in main
    subprocess_fn(rank=0, args=args, temp_dir=temp_dir)
  File "/content/Diffusion-GAN/diffusion-insgen/train.py", line 422, in subprocess_fn
    training_loop.training_loop(rank=rank, **args)
  File "/content/Diffusion-GAN/diffusion-insgen/training/training_loop.py", line 351, in training_loop
    loss.accumulate_gradients(phase=phase.name, real_img=real_img, real_c=real_c, gen_z=gen_z, gen_c=gen_c, sync=sync, gain=gain, cl_phases=cl_phases, D_ema=D_ema, g_fake_cl=not no_cl_on_g, **cl_loss_weight)
  File "/content/Diffusion-GAN/diffusion-insgen/training/contrastive_loss.py", line 107, in accumulate_gradients
    loss_Gmain.mean().mul(gain).backward()
  File "/usr/local/lib/python3.9/dist-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/usr/local/lib/python3.9/dist-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
  File "/content/Diffusion-GAN/diffusion-insgen/torch_utils/ops/grid_sample_gradfix.py", line 50, in backward
    grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid)
  File "/usr/local/lib/python3.9/dist-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/content/Diffusion-GAN/diffusion-insgen/torch_utils/ops/grid_sample_gradfix.py", line 59, in forward
    grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
TypeError: 'tuple' object is not callable
```
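For context, the error itself means a tuple is being called as if it were a function. In newer PyTorch releases, `torch._C._jit_get_operation` (which `grid_sample_gradfix.py` uses to look up `op`) returns a `(op, overload_names)` tuple rather than the op alone, which would produce exactly this failure. A minimal, torch-free sketch of that mechanism, using a hypothetical stand-in for the lookup function:

```python
# Hypothetical stand-in for torch._C._jit_get_operation. Assumption:
# newer PyTorch returns a (op, overload_names) tuple, where older
# versions returned the callable op directly.
def jit_get_operation_newer(name):
    # Newer-style API: returns (callable, overload_names)
    return (lambda *args: "grad tensors", [""])

op = jit_get_operation_newer("aten::grid_sampler_2d_backward")
try:
    op(None, None, None, 0, 0, False)  # old code path: calls the tuple itself
except TypeError as err:
    print(err)  # 'tuple' object is not callable

op, _overloads = jit_get_operation_newer("aten::grid_sampler_2d_backward")
print(op(None, None, None, 0, 0, False))  # unpacked first: the call succeeds
```

This illustrates why the same code runs fine on the PyTorch version the repo was written for but breaks on the newer one Colab ships with.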

I didn't make any changes to the code, and I ran it on Google Colab with the exact arguments given in this repository. Thanks in advance for your help.

Zhendong-Wang commented 1 year ago

Hi, thanks for your interest in our work. If you are running Diffusion InsGen, you should use https://github.com/Zhendong-Wang/Diffusion-GAN/blob/main/diffusion-stylegan2/environment.yml to build your environment (PyTorch 1.8.1 and Python 3.8), which should be similar to the StyleGAN2-ADA environment. (From your traceback, it looks like you are running Python 3.9 with a newer PyTorch instead.)
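For anyone hitting this, the recommended setup would look roughly like the following; the conda environment name is whatever the `name:` field in `environment.yml` declares, so check the file first (the placeholder below is not the real name):

```shell
# Clone the repo and build the environment the maintainer points to
# (PyTorch 1.8.1 / Python 3.8).
git clone https://github.com/Zhendong-Wang/Diffusion-GAN.git
cd Diffusion-GAN
conda env create -f diffusion-stylegan2/environment.yml
conda activate <name-from-environment.yml>
python -c "import torch; print(torch.__version__)"  # expect 1.8.1
```

Running the training script inside this environment instead of Colab's default Python 3.9 / newer-PyTorch stack avoids the API mismatch in `grid_sample_gradfix.py`.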