Closed: riderexin closed this issue 4 months ago
Since the number of segments differs between samples, we do not support a batch size larger than 1 on a single card. However, you can still increase the effective batch size by using multiple GPUs.
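For context, scaling the batch size across GPUs usually means DistributedDataParallel with each process keeping `batch_size=1` on its own card. Below is a minimal, generic PyTorch sketch of that pattern, not the repo's actual training script; the dataset and model are toy stand-ins:

```python
# Minimal, generic DDP sketch (not the repo's actual training script):
# each GPU/process keeps batch_size=1, so the effective batch size equals
# the number of processes, e.g. `torchrun --nproc_per_node=4 this_script.py`.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, Dataset, DistributedSampler


class VariableSizeDataset(Dataset):
    """Toy stand-in for a dataset whose samples have different sizes."""
    def __len__(self):
        return 16

    def __getitem__(self, idx):
        n_segments = 100 + idx  # varies per sample, like the real data
        return {"feat": torch.randn(n_segments, 4), "target": torch.randn(1)}


def main():
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    dataset = VariableSizeDataset()
    sampler = DistributedSampler(dataset)      # each rank sees a different shard
    loader = DataLoader(dataset, batch_size=1, sampler=sampler)

    model = DDP(torch.nn.Linear(4, 1).cuda(rank), device_ids=[rank])  # toy model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    for batch in loader:
        feat = batch["feat"].squeeze(0).cuda(rank)      # [n_segments, 4]
        target = batch["target"].cuda(rank)
        optimizer.zero_grad()
        loss = (model(feat).mean() - target.mean()) ** 2  # dummy loss
        loss.backward()      # DDP averages gradients across ranks
        optimizer.step()


if __name__ == "__main__":
    main()
```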
Many thanks!!!!!!
To modify the DataLoader, I changed the hyperparameters in "options/train/basicpbc_pbch_train_option.yml".
- If num_worker > 0 and batch_size > 1: `RuntimeError: Caught RuntimeError in DataLoader worker process 0.` and `RuntimeError: Trying to resize storage that is not resizable`
- If num_worker = 0 and batch_size > 1: `RuntimeError: stack expects each tensor to be equal size, but got [261, 4] at entry 0 and [137, 4] at entry 1`
- If num_worker = 0 and batch_size = 1: training works, but it is too slow.
After checking the dataset, I found that each sample is a dictionary whose values have different shapes/sizes across samples.
To be more specific, I printed the keys and the values' shapes for two samples (one sample is shown below, and another sample below that). I just wonder how to use the DataLoader with a batch size larger than 1.
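For reference, the "stack expects each tensor to be equal size" error comes from the default collate function trying to `torch.stack` tensors whose segment counts differ. If the per-sample sizes cannot be equalized, one generic workaround is a custom `collate_fn` that keeps variable-sized keys as Python lists instead of stacking them; this is a minimal sketch only, and the downstream model would need to accept list-valued batch entries:

```python
# Minimal sketch of a collate_fn that stacks only keys whose shapes match
# across the batch and keeps variable-sized tensors as lists.
import torch
from torch.utils.data import DataLoader


def list_collate(samples):
    """Batch a list of dict samples; stack only keys with identical shapes."""
    batch = {}
    for key in samples[0]:
        values = [s[key] for s in samples]
        shapes = {tuple(v.shape) for v in values if torch.is_tensor(v)}
        if len(shapes) == 1 and torch.is_tensor(values[0]):
            batch[key] = torch.stack(values)  # same shape -> normal batching
        else:
            batch[key] = values               # variable shape -> keep as a list
    return batch


# Usage (dataset is whatever Dataset the training option file configures):
# loader = DataLoader(dataset, batch_size=2, num_workers=4, collate_fn=list_collate)
```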