NVlabs / SPADE

Semantic Image Synthesis with SPADE
https://nvlabs.github.io/SPADE/

IndexError with custom dataset #163

Open emoebel opened 2 years ago

emoebel commented 2 years ago

Hello, I'm trying to run SPADE with a custom dataset, but when I run:

python train.py --name test --dataset_mode custom --label_dir datasets/increased/B/ --image_dir datasets/increased/A/ --label_nc 3

I get the error:

dataset [CustomDataset] of size 357 was created
Network [SPADEGenerator] was created. Total number of parameters: 92.1 million. To see the architecture, do print(network).
Network [MultiscaleDiscriminator] was created. Total number of parameters: 5.5 million. To see the architecture, do print(network).
create web directory ./checkpoints/increased_monoclass/web...
/net/serpico-fs2/emoebel/venv/spade/lib/python3.7/site-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
  "Argument interpolation should be of type InterpolationMode instead of int. "
Traceback (most recent call last):
  File "train.py", line 34, in <module>
    for i, data_i in enumerate(dataloader, start=iter_counter.epoch_iter):
  File "/net/serpico-fs2/emoebel/venv/spade/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/net/serpico-fs2/emoebel/venv/spade/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/net/serpico-fs2/emoebel/venv/spade/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/net/serpico-fs2/emoebel/venv/spade/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/net/serpico-fs2/emoebel/increased/semantic_img_synthesis/SPADE/data/pix2pix_dataset.py", line 81, in __getitem__
    instance_path = self.instance_paths[index]
IndexError: list index out of range

My dataset is organised as follows (like for pix2pix from pytorch-CycleGAN-and-pix2pix):

A
|-train
|-val
B
|-train
|-val

Any idea why I get this error?

agana99 commented 2 years ago

Hi, I'm facing the same issue with test.py when I run it on my own inputs from a custom dataset folder. Did you manage to fix it?

Also, I'm wondering if your val_img folder is empty. Mine only throws this error when it's empty, but I can't populate val_img with images since the input comes from the user.

emoebel commented 2 years ago

In the end I made it work with:

python train.py --name canalB --dataset_mode custom --label_dir datasets/increased/canalB/B/ --image_dir datasets/increased/canalB/A/ --instance_dir datasets/increased/canalB/C/ --label_nc 2

If you don't use --instance_dir, you have to specify --no_instance; this solved the issue on my side (see the sketch below for why). Now I am facing problems with test.py... The instructions given on the main GitHub page do not work.
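
For anyone hitting the same IndexError, here is a rough standalone sketch of why the missing flag causes it. This is not the actual SPADE source, just my approximation of the logic in data/custom_dataset.py and data/pix2pix_dataset.py (the class, function, and path names below are placeholders):

# Sketch (not the real SPADE code) of why omitting both --instance_dir and
# --no_instance leads to "IndexError: list index out of range".

class FakeOptions:
    """Stands in for the parsed command-line options; names mirror the real flags."""
    def __init__(self, instance_dir="", no_instance=False):
        self.instance_dir = instance_dir
        self.no_instance = no_instance

def get_instance_paths(opt):
    # The custom dataset only collects instance maps when --instance_dir is given;
    # otherwise the list stays empty.
    if len(opt.instance_dir) > 0:
        return ["datasets/increased/canalB/C/0001.png"]  # placeholder path
    return []

def load_instance(opt, instance_paths, index):
    # __getitem__ skips the instance map only when --no_instance is set;
    # otherwise it indexes instance_paths, which is empty here.
    if opt.no_instance:
        return 0
    return instance_paths[index]

opt = FakeOptions(instance_dir="", no_instance=False)  # neither flag given -> crash
try:
    load_instance(opt, get_instance_paths(opt), 0)
except IndexError as e:
    print("IndexError:", e)  # "list index out of range", matching the traceback above

So with the original command the quick fix should be something like (assuming the label and image folders are otherwise fine):

python train.py --name test --dataset_mode custom --label_dir datasets/increased/B/ --image_dir datasets/increased/A/ --label_nc 3 --no_instance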

Please, dear author of this GitHub repo, could you make a tutorial on how to use your code with a custom dataset? That would be very helpful.