mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.
https://torchsparse.mit.edu
MIT License

Does torchsparse support pooling blocks? #263

Open Tortoise0Knight opened 7 months ago

Tortoise0Knight commented 7 months ago

For example, the pooling layers implemented in MinkowskiEngine: https://nvidia.github.io/MinkowskiEngine/pooling.html#minkowskimaxpooling. I only found global_max_pool().

ys-2020 commented 7 months ago

Hi. We haven't implemented those pooling kernels yet. We will consider implementing them. Thank you for reaching out!

YilmazKadir commented 3 months ago

I also need average pooling for my application and would appreciate it if you could implement this. Alternatively, I would be happy if you could suggest a way to implement average pooling with convolutions. I thought of using a convolution with all kernel elements set to 1/N, but N needs to be the number of active voxels inside the receptive field, and I do not know how to get that number.
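One workaround that avoids new kernels: append an all-ones channel to the features and run a single frozen sparse convolution whose kernel is an identity matrix at every offset. Each output channel is then the plain sum of its input channel over the active voxels in the window, so the extra channel sum-pools to exactly N, and dividing by it gives the average. Below is a minimal sketch, not a torchsparse API; `SparseAvgPool` is a hypothetical helper, and it assumes `spnn.Conv3d` stores its weights in a `.kernel` parameter of shape `(kernel_volume, in_channels, out_channels)`, which may differ across torchsparse versions.

```python
import torch
import torch.nn as nn
import torchsparse.nn as spnn
from torchsparse import SparseTensor


class SparseAvgPool(nn.Module):
    """Average pooling emulated by a single frozen sparse convolution.

    An all-ones channel is appended to the features, and the kernel is an
    identity matrix at every offset, so each output channel is the plain
    sum of its input channel over the active voxels in the window. The
    extra channel therefore sums to N, the number of active voxels in the
    receptive field, and dividing by it yields the average.
    """

    def __init__(self, channels: int, kernel_size: int = 2, stride: int = 2):
        super().__init__()
        self.conv = spnn.Conv3d(
            channels + 1, channels + 1, kernel_size, stride=stride, bias=False
        )
        with torch.no_grad():
            # Assumption: the weights live in `.kernel` with shape
            # (kernel_volume, in_channels, out_channels); the attribute
            # name may differ across torchsparse versions.
            self.conv.kernel.copy_(
                torch.eye(channels + 1).expand_as(self.conv.kernel)
            )
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x: SparseTensor) -> SparseTensor:
        # Augment the features with a ones channel that will count N.
        ones = torch.ones_like(x.feats[:, :1])
        aug = SparseTensor(torch.cat([x.feats, ones], dim=1), x.coords, x.stride)
        out = self.conv(aug)
        counts = out.feats[:, -1:]  # N >= 1 for every surviving output voxel
        return SparseTensor(out.feats[:, :-1] / counts, out.coords, out.stride)
```

Because the sums and the counts come out of the same convolution, they are aligned row for row with no extra kernel map. This is also why a fixed 1/N weight cannot work: N varies per output voxel, which is exactly what the counting channel resolves.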

kabouzeid commented 3 months ago

Yes, that would be extremely useful. I was in the process of migrating my code from MinkowskiEngine, but sadly the lack of pooling layers makes this impossible for now.

@zhijian-liu @kentang-mit