luoxiaoliu / PFENet2Plus


question about train #1

Open 1yanzu opened 11 months ago

1yanzu commented 11 months ago

I'm sorry to disturb you. In your code, when batch_size=4, the input is [4, 3, 473, 473], but after the forward pass it becomes [1, 3, 473, 473], and after the NSM processing it becomes [1, 256, 1, 1]. At this point batch_size=1, so self.bn = nn.BatchNorm2d(out_c) raises: ValueError("Expected more than 1 value per channel when training, got input size {}".format(size)). Looking forward to your reply.
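For context, this error can be reproduced in isolation: in training mode, BatchNorm2d normalizes with statistics computed from the current batch, so it needs more than one value per channel, and a [1, 256, 1, 1] input provides exactly one; in eval mode the stored running statistics are used instead, so the same input passes. A minimal sketch (not the PFENet2Plus code itself):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(256)
bn.train()  # training mode: statistics are computed from the input batch

x = torch.randn(1, 256, 1, 1)  # one value per channel, as after the NSM
try:
    bn(x)
    raised = False
except ValueError:
    # "Expected more than 1 value per channel when training, ..."
    raised = True

bn.eval()    # eval mode normalizes with the running statistics
out = bn(x)  # same input, no error
```

This is why the shape collapsing to batch size 1 during training triggers the ValueError, while inference with the same shapes would not.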

luoxiaoliu commented 11 months ago

Hello, thank you for your interest in our project. It is possible to configure a batch size of 4 when utilizing a single GPU; however, it is essential to ensure that the batch size for a single GPU exceeds 1.

1yanzu commented 11 months ago

> Hello, thank you for your interest in our project. It is possible to configure a batch size of 4 when utilizing a single GPU; however, it is essential to ensure that the batch size for a single GPU exceeds 1.

Thanks for the reply, but I just followed the configuration in your code, single GPU and batch_size=4.

luoxiaoliu commented 11 months ago

I suggest you use "CUDA_VISIBLE_DEVICES=" to specify the GPU to be used; I have encountered the issue you mentioned before.
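The variable can be set on the command line (e.g. `CUDA_VISIBLE_DEVICES=0 python train.py ...`) or from inside the script before any CUDA work is done. A sketch of the latter; the device index "0" is an assumption for your machine:

```python
import os

# Restrict this process to a single GPU. This must be set before the first
# CUDA call (in practice, before importing torch), or it has no effect.
# The index "0" is an assumption; pick the GPU you want to use.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

With only one device visible, all four samples of the batch land on that GPU, so the per-GPU batch size stays above 1 and the BatchNorm error does not occur.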