xwjabc / hed

A PyTorch reimplementation of Holistically-Nested Edge Detection

batch-size >1 possible for training? #12

Open aasharma90 opened 5 years ago

aasharma90 commented 5 years ago

Hi @xwjabc,

Thanks for your code!

Could you please tell me if it is possible to use a batch size greater than 1 for training? When I try this, I get the following error:

```
upsample2 = torch.nn.functional.conv_transpose2d(score_dsn2, self.weight_deconv2, stride=2)
Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'
```

Just wondering if you know this already.

Thanks, AA

xwjabc commented 5 years ago

Hi! Currently, the code does not support a batch size > 1: the training images have different sizes, and PyTorch cannot stack tensors of varying sizes into a single mini-batch.
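To illustrate the limitation (not from the repo, just a minimal sketch): the default `DataLoader` collate calls `torch.stack`, which fails as soon as two images in the batch have different spatial sizes.

```python
import torch

# Two dummy "images" with swapped H x W, as happens with BSDS-style data.
a = torch.zeros(3, 320, 480)
b = torch.zeros(3, 480, 320)

try:
    # This is effectively what the default DataLoader collate_fn does.
    torch.stack([a, b])
except RuntimeError as err:
    print("cannot batch:", err)
```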

lychrel commented 4 years ago

Not sure if this is considered best practice, but if you really want the batch-size speedup, you could pad or resize all images to shared dimensions in the data loader.
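A sketch of that idea as a custom `collate_fn` (the function name is hypothetical, and it assumes the dataset yields `(image, label)` pairs of C x H x W tensors; adapt to this repo's actual dataset output):

```python
import torch
import torch.nn.functional as F

def pad_collate(batch):
    """Zero-pad every sample in the batch to the largest H x W present."""
    max_h = max(img.shape[1] for img, _ in batch)
    max_w = max(img.shape[2] for img, _ in batch)
    imgs, lbls = [], []
    for img, lbl in batch:
        ph, pw = max_h - img.shape[1], max_w - img.shape[2]
        # Pad only right/bottom so pixel coordinates stay aligned.
        imgs.append(F.pad(img, (0, pw, 0, ph)))
        lbls.append(F.pad(lbl, (0, pw, 0, ph)))
    return torch.stack(imgs), torch.stack(lbls)

# Usage: DataLoader(dataset, batch_size=8, collate_fn=pad_collate)
```

Note that zero-padding changes edge statistics near the padded border, so you may also want to mask the padded region out of the loss.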

BruceYu-Bit commented 2 years ago

> Hi! Currently, the code does not support batch size > 1 since the images have different sizes and PyTorch cannot support mini-batch with various sizes.

Hi, if you train with batch size = 1, how long does training take, and which epoch's checkpoint do you use for testing?

xwjabc commented 2 years ago

We train the HED model for 40 epochs and use the last epoch's checkpoint for evaluation. Training takes ~27 hours on one NVIDIA GeForce GTX Titan X (Maxwell).