Open AlanKoschel opened 2 years ago
Hi @AlanKoschel,
I suspect what is going on here. In the batch norm function, I use a squeeze
function to get rid of unnecessary dimensions. This means that if the input point cloud batch contains only one point, there is a bug, as this dimension gets squeezed too.
Could you print the dimensions of your batch.points
tensors (for each layer)? If any of them happens to be [1, 3], then you have your culprit.
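To make the failure mode concrete, here is a minimal sketch of the squeeze pitfall described above. It uses NumPy (which has the same squeeze semantics as PyTorch) and is illustrative only, not the actual KPConv batch-norm code:

```python
import numpy as np

# Illustrative sketch, not the actual KPConv code: squeeze() removes
# ALL size-1 dimensions, so a batch with a single point loses its
# point dimension, while a batch with several points is untouched.
pts = np.random.randn(5, 3)   # normal case: 5 points, 3 coordinates
one = np.random.randn(1, 3)   # degenerate case: a single point

assert pts.squeeze().shape == (5, 3)   # no size-1 dims, shape unchanged
assert one.squeeze().shape == (3,)     # the point dimension vanished too

# A reshape keeps the rank fixed regardless of how many points there are:
assert one.reshape(-1, one.shape[-1]).shape == (1, 3)
```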
There could be a way to fix this squeeze function so that no error is thrown (using reshape instead). But I don't think it should be corrected, as batch normalization is not supposed to be used on a single element. In your case, I suggest not using batch norm for your experiment, which, if I understood correctly, is for debugging purposes anyway. See the parameter:
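As a side note on why batch normalization degenerates on a single element: normalizing one sample by its own batch statistics maps every feature to roughly zero (before the learned affine gamma/beta), so the layer destroys the input signal. A minimal sketch:

```python
import numpy as np

# Sketch of batch norm's core normalization step on a batch of one sample.
# With a single element, the batch mean equals the sample itself and the
# batch variance is zero, so the normalized output is ~0 for every feature.
x = np.array([[2.0, -1.5, 0.7]])            # batch of one sample, 3 features
mean = x.mean(axis=0)
var = x.var(axis=0)
normed = (x - mean) / np.sqrt(var + 1e-5)   # eps for numerical stability

assert np.allclose(normed, 0.0)             # all information is wiped out
```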
Hi @HuguesTHOMAS , thanks for your detailed explanation, I will check that soon!
Dear @HuguesTHOMAS, first of all, thank you very much for your implementation of KPConv. I am using the network to train on colored point clouds, 3D-reconstructed from drone images. Training, validation, and testing work very well, but as soon as I set
batch_num=1
I encountered 2 errors. First one:
Second one during validation:
I am curious about what is happening there. In another Issue you mentioned that you recommend training only with
batch_num>=3
, so the only reason I train with one batch is that I want to investigate its learning behaviour. Training another network with 1 batch per iteration, I noticed that the network learns nothing. So I wanted to see whether KPConv exhibits the same behaviour and whether it is due to the batch size. Thanks in advance!
Edit: Both errors occur randomly, in different epochs and iterations each time.