ykdai / BasicPBC

Official Implementation of "Learning Inclusion Matching for Animation Paint Bucket Colorization"

How to stack batches? Error when training the model with a batch_size greater than 1 #26

Closed riderexin closed 2 months ago

riderexin commented 2 months ago

To modify the DataLoader, I changed the hyperparameters in `options/train/basicpbc_pbch_train_option.yml`.

- If `num_worker > 0` and `batch_size > 1`: `RuntimeError: Caught RuntimeError in DataLoader worker process 0.` followed by `RuntimeError: Trying to resize storage that is not resizable`.
- If `num_worker = 0` and `batch_size > 1`: `RuntimeError: stack expects each tensor to be equal size, but got [261, 4] at entry 0 and [137, 4] at entry 1`.
- If `num_worker = 0` and `batch_size = 1`: training works, but I think it's too slow.

After checking the dataset, I found that each sample is a dictionary whose values have different shapes/sizes across samples.

To be more specific, I printed the keys and value shapes of two samples (screenshots of the two printouts were attached below). I just wonder how to use the DataLoader with a batch size greater than 1.
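For context, the `stack expects each tensor to be equal size` error comes from PyTorch's default collate function, which tries to `torch.stack` the per-sample tensors. A minimal sketch of the failure mode and one common workaround, a custom `collate_fn` that keeps variable-sized entries as Python lists instead of stacking them (the dataset here is a hypothetical toy, not the actual BasicPBC dataset):

```python
import torch
from torch.utils.data import DataLoader, Dataset

# Toy dataset mimicking the issue: each sample's "segment" tensor has a
# different first dimension (e.g. [261, 4] vs [137, 4]), so the default
# collate's torch.stack fails for batch_size > 1.
class ToySegDataset(Dataset):
    def __init__(self, sizes):
        self.sizes = sizes

    def __len__(self):
        return len(self.sizes)

    def __getitem__(self, idx):
        return {"segment": torch.zeros(self.sizes[idx], 4)}

# Workaround: collate each key into a plain list of tensors instead of
# stacking, leaving the model/loss code to iterate over the list.
def list_collate(batch):
    return {key: [sample[key] for sample in batch] for key in batch[0]}

loader = DataLoader(ToySegDataset([261, 137]), batch_size=2, collate_fn=list_collate)
batch = next(iter(loader))
print([tuple(t.shape) for t in batch["segment"]])  # [(261, 4), (137, 4)]
```

Note this only fixes the stacking error; the training code would still need to handle a list of per-sample tensors, which is why the maintainers' answer below matters.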

ykdai commented 2 months ago

Since the number of segments differs between samples, we do not support a batch size larger than 1 on one card. However, you can still increase the effective batch size by using multiple GPUs.
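To illustrate how multiple GPUs increase the effective batch size while each process keeps `batch_size=1`: `torch.utils.data.DistributedSampler` shards the dataset so that N processes together consume N samples per step. A minimal single-process sketch (the toy dataset and the explicit `num_replicas`/`rank` arguments are for illustration; in real DDP training these come from the process group):

```python
import torch
from torch.utils.data import DataLoader, Dataset, DistributedSampler

# Hypothetical toy dataset of 8 indexed samples.
class ToyDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.tensor([idx])

# With 2 replicas, rank 0 sees every other sample; rank 1 sees the rest.
# Each rank still loads batch_size=1, so the effective batch size is 2.
sampler = DistributedSampler(ToyDataset(), num_replicas=2, rank=0, shuffle=False)
loader = DataLoader(ToyDataset(), batch_size=1, sampler=sampler)

indices = [int(batch[0]) for batch in loader]
print(indices)  # [0, 2, 4, 6]
```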

riderexin commented 2 months ago

> Since the number of segments differs between samples, we do not support a batch size larger than 1 on one card. However, you can still increase the effective batch size by using multiple GPUs.

Many thanks!