mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.
https://torchsparse.mit.edu
MIT License

[BUG] GlobalAvgPool's output has wrong batch size #217

Closed · fishbotics closed this issue 1 year ago

fishbotics commented 1 year ago

Current Behavior

In version 1.4, GlobalAvgPool pooled the sparse tensor into a dense tensor with the correct batch size. In version 2.1, TorchSparse changed which coordinate dimension holds the batch index (according to the docs), but it seems this change was not propagated to GlobalAvgPool.
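
For context, this is my understanding of the layout change from the docs (a sketch with made-up coordinates, not TorchSparse source; coords_v1 and coords_v2 are just illustrative variables):

import torch

xyz = torch.randint(2, 5, (100, 3))      # spatial coordinates
b = torch.randint(0, 10, (100, 1))       # batch indices

coords_v1 = torch.cat((xyz, b), dim=1)   # v1.x layout: [x, y, z, batch]
coords_v2 = torch.cat((b, xyz), dim=1)   # v2.1 layout: [batch, x, y, z]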

Here is a test to verify:

import torch
import torchsparse.nn as spnn
from torchsparse import SparseTensor


def test():
    # 100 points, each with a 256-dim feature and a batch index in [0, 10)
    feats = torch.rand((100, 256))
    batch_ids = torch.randint(0, 10, (100, 1))
    coord_vals = torch.randint(2, 5, (100, 3))
    # v2.1 layout: batch index first, then spatial coordinates
    coords = torch.cat((batch_ids, coord_vals), dim=1)
    st = SparseTensor(coords=coords, feats=feats)
    out = spnn.GlobalAvgPool()(st)
    # the pooled output should have one row per batch in the input
    assert out.size(0) == len(
        torch.unique(batch_ids)
    ), f"{out.size(0)} vs {len(torch.unique(batch_ids))}"

Expected Behavior

I would expect the test above to pass (i.e., the batch size of the output should match the batch size of the input sparse tensor).
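
For reference, here is a plain-PyTorch sketch of the reduction GlobalAvgPool is expected to perform (one averaged feature row per batch index). global_avg_pool_reference is a hypothetical helper written for illustration, not TorchSparse API:

import torch

def global_avg_pool_reference(feats: torch.Tensor, batch_ids: torch.Tensor) -> torch.Tensor:
    # Average the feature rows that share a batch index; the output has
    # one row per batch (rows for absent batch ids stay zero).
    batch_ids = batch_ids.view(-1).long()
    num_batches = int(batch_ids.max()) + 1
    sums = torch.zeros(num_batches, feats.size(1), dtype=feats.dtype)
    sums.index_add_(0, batch_ids, feats)                            # per-batch feature sums
    counts = torch.bincount(batch_ids, minlength=num_batches).clamp(min=1)
    return sums / counts.unsqueeze(1).to(feats.dtype)               # shape: (num_batches, C)

With the inputs from the test above, global_avg_pool_reference(feats, batch_ids) returns a (num_batches, 256) tensor, which is the shape the assertion expects from GlobalAvgPool.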

Environment

- GCC: 11.3
- NVCC: 11.7, V11.7.99
- PyTorch: 2.0.1+cu117
- PyTorch CUDA: 11.7
- TorchSparse: 2.1.0+torch20cu117
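
For anyone reproducing this, the relevant versions can be checked from Python (assuming torchsparse exposes __version__, as recent releases do):

import torch
import torchsparse

print(torch.__version__)        # expect 2.0.1+cu117
print(torch.version.cuda)       # expect 11.7
print(torchsparse.__version__)  # expect 2.1.0 (a local build tag may be appended)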

Anything else?

No response

ys-2020 commented 1 year ago

Hi @fishbotics, thanks a lot for bringing this bug to our attention. We greatly appreciate your feedback! We will rebuild the wheel files to fix the problem. Please stay tuned for updates. Thank you!

ys-2020 commented 1 year ago

Hi @fishbotics! We have updated the wheel files on our PyPI server to fix the bug. Please reinstall torchsparse v2.1.0 and check whether the problem is resolved. If the error persists, please let me know. Thanks!