cc-ai / climategan

Code and pre-trained model for an algorithm that generates visualisations of three climate-change-related events: floods, wildfires, and smog.
https://thisclimatedoesnotexist.com
GNU General Public License v3.0

Data loading: RuntimeError: It is expected output_size equals to 3, but got size 2 #134

Closed: vict0rsch closed this issue 3 years ago

vict0rsch commented 3 years ago
```
Creating display images...

Traceback (most recent call last):
  File "train.py", line 129, in <module>
    main()
  File "/home/mila/s/schmidtv/.conda/envs/omnienv/lib/python3.8/site-packages/hydra/main.py", line 20, in decorated_main
    run_hydra(
  File "/home/mila/s/schmidtv/.conda/envs/omnienv/lib/python3.8/site-packages/hydra/_internal/utils.py", line 171, in run_hydra
    hydra.run(
  File "/home/mila/s/schmidtv/.conda/envs/omnienv/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 82, in run
    return run_job(
  File "/home/mila/s/schmidtv/.conda/envs/omnienv/lib/python3.8/site-packages/hydra/plugins/common/utils.py", line 109, in run_job
    ret.return_value = task_function(task_cfg)
  File "train.py", line 117, in main
    trainer.setup()
  File "/home/mila/s/schmidtv/ccai/github/omnigan/omnigan/trainer.py", line 260, in setup
    self.display_images[mode][domain] = [
  File "/home/mila/s/schmidtv/ccai/github/omnigan/omnigan/trainer.py", line 261, in <listcomp>
    Dict(self.loaders[mode][domain].dataset[i])
  File "/home/mila/s/schmidtv/ccai/github/omnigan/omnigan/data.py", line 344, in __getitem__
    "data": self.transform(
  File "/home/mila/s/schmidtv/.conda/envs/omnienv/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 61, in __call__
    img = t(img)
  File "/home/mila/s/schmidtv/ccai/github/omnigan/omnigan/transforms.py", line 32, in __call__
    return {
  File "/home/mila/s/schmidtv/ccai/github/omnigan/omnigan/transforms.py", line 33, in <dictcomp>
    task: F.interpolate(tensor, (self.h, self.w), mode=interpolation(task))
  File "/home/mila/s/schmidtv/.conda/envs/omnienv/lib/python3.8/site-packages/torch/nn/functional.py", line 3145, in interpolate
    return torch._C._nn.upsample_nearest3d(input, output_size, sfl[0], sfl[1], sfl[2])
RuntimeError: It is expected output_size equals to 3, but got size 2
```
vict0rsch commented 3 years ago

This is due to task `d` (depth).

vict0rsch commented 3 years ago

Weird shape: `torch.Size([1, 1, 1000, 1000, 4])`

Same error with a minimal repro:

```python
import torch
import torch.nn.functional as F

F.interpolate(torch.randn(1, 1, 1000, 1000, 4), size=(100, 100), mode="nearest")
# RuntimeError: It is expected output_size equals to 3, but got size 2
```
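
For the record, `F.interpolate` picks its kernel from the input's dimensionality: a 5D tensor is read as volumetric `[N, C, D, H, W]`, so `mode="nearest"` dispatches to `upsample_nearest3d`, which requires a 3-element output size, hence the error when given the 2-element `(100, 100)`. A minimal sketch of what unblocks the call, assuming the trailing 4 really is an RGBA channel axis (that assumption is mine, not confirmed above):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 1000, 1000, 4)  # RGBA image with a stray extra dim

# Assumed fix (a sketch, not the repo's actual code): fold the 4 RGBA
# channels into the channel dim to get a standard 4D [N, C, H, W] tensor,
# which interpolate treats as a 2D image and resizes with a 2-element size.
x4d = x.squeeze(1).permute(0, 3, 1, 2)  # [1, 4, 1000, 1000]
out = F.interpolate(x4d, size=(100, 100), mode="nearest")
print(out.shape)  # torch.Size([1, 4, 100, 100])
```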
vict0rsch commented 3 years ago

Why do we have this input shape?

melisandeteng commented 3 years ago

> Why do we have this input shape?

I was going to ask the same question...

melisandeteng commented 3 years ago

Ok, I think I know what is happening. What did you use for the depth in WD? If you used MegaDepth predictions, then that's where the problem comes from. Right now, all simulated depth data is read as though it came from the Unity simulator, i.e. as 3-channel images.
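
If that is the cause, the loader needs to normalise depth to a single-channel tensor whatever its source. A hypothetical sketch (the `load_depth` helper and the "all channels carry the same value" assumption are mine, not the repo's code):

```python
import numpy as np
import torch
from PIL import Image

def load_depth(path):
    # Unity-style depth arrives as a 3- or 4-channel image, MegaDepth
    # predictions as a single-channel array; normalise both to [1, 1, H, W].
    arr = np.array(Image.open(path))
    if arr.ndim == 3:
        # Keep one channel, assuming they all encode the same depth value.
        arr = arr[..., 0]
    tensor = torch.from_numpy(arr.astype(np.float32))
    return tensor.unsqueeze(0).unsqueeze(0)  # [1, 1, H, W]
```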

vict0rsch commented 3 years ago

Ok, it's a shame we don't have WD depth, as it's what brought @tianyu-z's best performance (on beheaded omnigan :p)

tianyu-z commented 3 years ago

@vict0rsch Sorry for the confusion: in the #12 experiment I didn't include the WD data. Just to make everything clear, when you open the link here, you will see two parts of the form. The experiments under `opt.lr` were not trained with the WD data; the experiments under `decoder` were trained with WD data.