Closed: lxtGH closed this issue 5 years ago
Does this mean you only allow batch size = 1 for training? @bermanmaxim
I think this means you need to split your batch into individual images before sending them to the pooling layer; see the sketch below. @Mathijssch, do you confirm?
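For example, a minimal host-side sketch of what I mean, assuming a contiguous batch of images and a hypothetical single-image wrapper `spx_max_pooling_forward` (not the repository's actual API):

```cuda
// Hypothetical single-image entry point wrapping the repository's kernel
// launch; the function and its signature are assumptions for this sketch.
void spx_max_pooling_forward(const float* image, float* pooled);

// Workaround: loop over the batch on the host and pool one image at a time,
// so the (batch-unaware) kernel only ever sees a single image.
void pool_batch(const float* input, float* output,
                int batch_size, int image_elems, int pooled_elems) {
    for (int b = 0; b < batch_size; ++b) {
        spx_max_pooling_forward(input + (size_t)b * image_elems,
                                output + (size_t)b * pooled_elems);
    }
}
```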
However, when training on a segmentation task, batch size is a key factor for performance; setting batch size = 1 is not a reasonable choice.
Yes, @bermanmaxim's suggestion is correct: that would be the only option in this implementation, which is indeed a bit limiting. This limitation is not present in the PyTorch version, though.
wontfix
As we do not want to invest time in this project right now, this is marked wontfix. PRs are welcome, however! Thanks for your understanding.
I found that your code doesn't support training with batch size > 1: in the kernel function spx_max_pooling_backward_kernel, you don't allocate a block dimension for the batch.
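For reference, a hedged sketch of what allocating a batch dimension in the launch configuration could look like. The argument names, memory layout, and signature here are illustrative assumptions, not the repository's actual kernel:

```cuda
#include <cuda_runtime.h>

// Sketch of a backward kernel whose launch grid carries the batch in
// blockIdx.y, so a single launch covers every image in the batch.
__global__ void backward_with_batch(const float* grad_out, const int* argmax,
                                    float* grad_in, int n_spx, int n_pixels) {
    int s = blockIdx.x * blockDim.x + threadIdx.x;  // superpixel index
    int b = blockIdx.y;                             // batch index
    if (s < n_spx) {
        // Route each superpixel's gradient back to its argmax pixel.
        int src = argmax[b * n_spx + s];
        atomicAdd(&grad_in[(size_t)b * n_pixels + src],
                  grad_out[b * n_spx + s]);
    }
}

// Launch with the batch allocated on grid.y:
//   dim3 block(256);
//   dim3 grid((n_spx + block.x - 1) / block.x, batch_size);
//   backward_with_batch<<<grid, block>>>(grad_out, argmax, grad_in,
//                                        n_spx, n_pixels);
```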