Closed fishbotics closed 1 year ago
Hi @fishbotics, thanks a lot for bringing the bug to our attention. We greatly appreciate your feedback! We will rebuild the wheel files to solve the problem. Please stay tuned for our updates. Thank you!
Hi @fishbotics! We have updated the wheel files on our pypi server to fix the bug. Please reinstall torchsparse v2.1.0 to see if the problem has been solved. If there is still an error, please keep me informed. Thanks!
Is there an existing issue for this?
Current Behavior
In version 1.4, GlobalAvgPool collapsed the sparse tensor into a dense tensor with the correct batch size. In version 2.1, TorchSparse changed which coordinate dimension holds the batch index (according to the docs), but it seems this change was not propagated to GlobalAvgPool.
Here is a test to verify the behavior:
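(The original test did not survive in this capture. As a hedged illustration only — the function name, coordinate layout, and batch-index position below are assumptions for the sketch, not the torchsparse API — here is a NumPy sketch of the expected semantics: a global average pool over a batch-indexed sparse tensor should produce one row per batch.)

```python
import numpy as np

def global_avg_pool(coords, feats):
    """Average features per batch.

    Assumes coords has shape [N, 4] with the batch index in
    coords[:, 0] (the v2.1-style layout described above); this is
    an illustrative stand-in, not torchsparse's implementation.
    """
    batch_idx = coords[:, 0]
    batch_size = int(batch_idx.max()) + 1
    out = np.zeros((batch_size, feats.shape[1]), dtype=feats.dtype)
    for b in range(batch_size):
        out[b] = feats[batch_idx == b].mean(axis=0)
    return out

# Three points spread over two batches (batch indices 0 and 1).
coords = np.array([[0, 0, 0, 0],
                   [0, 1, 0, 0],
                   [1, 0, 0, 0]])
feats = np.ones((3, 8), dtype=np.float32)
pooled = global_avg_pool(coords, feats)
# The pooled output should have one row per batch.
assert pooled.shape == (2, 8)
```

The assertion at the end is the property the real test checks: the output's batch dimension must match the batch size encoded in the sparse tensor's coordinates.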
Expected Behavior
I would expect the test above to pass, i.e., the batch size of the output should match the batch size of the input sparse tensor.
Environment
Anything else?
No response