jingyuanli001 / RFR-Inpainting

The source code for CVPR 2020 accepted paper "Recurrent Feature Reasoning for Image Inpainting"
MIT License

Issue while training: stack expects each tensor to be equal size #47

Open · Akshay-Ijantkar opened this issue 3 years ago

Akshay-Ijantkar commented 3 years ago

Hello @blmoistawinde @jingyuanli001, this is my training command:

python run.py \
--data_root ./train_images \
--mask_root ./train_masks \
--model_save_path ./output_weights/test_iter_600500.pth \
--result_save_path ./training_results/ \
--model_path ./pre_trained_weights/checkpoint_celeba.pth \
--target_size 224 \
--mask_mode 0 \
--batch_size 5 \
--gpu_id 0 \
--num_iters 600050 

I am getting this error:

  File "run.py", line 38, in <module>
    run()
  File "run.py", line 35, in run
    model.train(dataloader, args.model_save_path, args.finetune, args.num_iters)
  File "/media/ai/e1f5ec44-04e5-413d-b816-57a5173d06c528/ai/Anglo_American/ACL/image_inpainting_GAN/research/RFR-Inpainting/model.py", line 58, in train
    for items in train_loader:
  File "/home/ai/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/ai/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/ai/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/home/ai/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 79, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/ai/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 79, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/ai/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 224, 224] at entry 0 and [4, 224, 224] at entry 3
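
For reference, the same failure can be reproduced outside the dataloader; a minimal sketch (not from the repo) using plain torch.stack, which is what default_collate calls internally:

import torch

rgb  = torch.zeros(3, 224, 224)   # 3-channel (RGB) image tensor
rgba = torch.zeros(4, 224, 224)   # 4-channel (RGBA) image tensor

# default_collate essentially stacks the per-sample tensors into one batch,
# so mixing channel counts in the same batch raises the error above:
torch.stack([rgb, rgba], 0)       # RuntimeError: stack expects each tensor to be equal size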
blmoistawinde commented 3 years ago

The model uses 3-channel (RGB) images by default, but it seems some of your images have 4 channels (e.g. RGBA). Try checking the channel count first and converting to RGB where necessary with a library such as PIL / Pillow; see the sketch below.
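
A minimal sketch (not part of this repo) of one way to do that check and conversion with Pillow before training; the folder name is only an assumption taken from the --data_root used above:

import os
from PIL import Image

DATA_ROOT = "./train_images"  # assumed: same folder passed as --data_root

for name in os.listdir(DATA_ROOT):
    path = os.path.join(DATA_ROOT, name)
    try:
        img = Image.open(path)
    except (IOError, OSError):
        continue  # skip files Pillow cannot read
    if img.mode != "RGB":  # e.g. RGBA, P, L
        print(f"converting {name} from {img.mode} to RGB")
        img.convert("RGB").save(path)

If the dataset loader opens images with Pillow, calling .convert("RGB") there at load time is an alternative to rewriting the files on disk.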

ziyan19833891 commented 2 months ago

Hello @blmoistawinde @jingyuanli001, I am running the same training command and getting the same error:

RuntimeError: stack expects each tensor to be equal size, but got [3, 224, 224] at entry 0 and [4, 224, 224] at entry 3

Has the problem been resolved? How was it resolved?