Closed l1997i closed 2 years ago
It's not possible with the current state of things. I couldn't run with a batch size higher than 1 due to the 12GB GPU memory limitation I had, so I didn't really consider higher batch sizes. I would suggest writing a new collate function for the dataloader that pads the point clouds to a common number of points; that should allow for higher batch sizes. You can also refer to the Cylinder3D codebase to see how a collate function can be implemented without padding.
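A minimal sketch of such a padding collate function, assuming each sample yields an `(N_i, 3)` point tensor (the actual dataset likely returns a tuple with labels and other fields, so the unpacking would need to be adapted):

```python
import torch

def pad_collate(batch):
    """Pad variable-size point clouds to the batch max so they stack.

    Assumes `batch` is a list of (N_i, 3) float tensors -- a simplification;
    adapt the unpacking if samples are (points, labels, ...) tuples.
    """
    max_pts = max(p.shape[0] for p in batch)
    padded, masks = [], []
    for p in batch:
        pad = max_pts - p.shape[0]
        # zero-pad along the point dimension: (left, right, top, bottom)
        padded.append(torch.nn.functional.pad(p, (0, 0, 0, pad)))
        # boolean mask marking real (non-padded) points
        masks.append(torch.arange(max_pts) < p.shape[0])
    return torch.stack(padded), torch.stack(masks)
```

The returned mask lets the model ignore padded points when computing features or losses; passing `collate_fn=pad_collate` to the `DataLoader` should then avoid the stacking error.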
Thanks so much for your excellent work and code. I have a question about the dataloader code: is it possible to set the batch size larger than 1 (e.g., 4, 8, or 16)? When I tried to set a larger `batch_size` in `training.yaml`, the code failed with the error "RuntimeError: stack expects each tensor to be equal size, but got [124266, 3] at entry 0 and [112695, 3] at entry 1". Thanks in advance!