mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.
https://torchsparse.mit.edu
MIT License

[BUG] v2.1 does not support dilation? #302

Open · Peeta586 opened 7 months ago

Peeta586 commented 7 months ago

Is there an existing issue for this?

Current Behavior

There is no dilation argument passed into the build_kernel_map function. Is that right?

kmap = F.build_kernel_map(
                coords,
                feats.shape[0],
                kernel_size,
                stride,
                padding,
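                # note: no dilation argument appears in this call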
                hashmap_keys,
                hashmap_vals,
                spatial_range,
                kmap_mode,
                dataflow,
                downsample_mode=config.downsample_mode,
                training=training,
                ifsort=config.ifsort,
                split_mask_num=config.split_mask_num,
                split_mask_num_bwd=config.split_mask_num_bwd,
            )

Expected Behavior

I want to know: how does dilation work in v2.1 if it is never passed to build_kernel_map?
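For reference, this is how dilation is conventionally defined for (sparse) convolution: the neighbor offsets used to build the kernel map are scaled by the dilation factor. Below is a minimal sketch of that general definition only; dilated_offsets is a hypothetical helper for illustration, not part of the TorchSparse API:

    import itertools

    def dilated_offsets(kernel_size, dilation, ndim=3):
        # Enumerate the integer voxel offsets of a dilated kernel.
        # For kernel_size=3 the per-axis offsets are {-1, 0, 1};
        # dilation=d scales them to {-d, 0, d}.
        half = (kernel_size - 1) // 2
        axis = [o * dilation for o in range(-half, half + 1)]
        return list(itertools.product(axis, repeat=ndim))

    # kernel_size=3, dilation=2 in 3D -> 27 offsets spanning {-2, 0, 2}
    print(len(dilated_offsets(3, 2)))   # 27
    print(dilated_offsets(3, 2)[0])     # (-2, -2, -2)

Under this definition, supporting dilation would mean scaling the offsets enumerated during kernel-map construction; whether v2.1 exposes such a path is exactly what this issue asks.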

Environment

- GCC: 9.0
- NVCC: 11.7
- PyTorch: 1.10.1
- PyTorch CUDA: 11.7
- TorchSparse: v2.1

Anything else?

No response

zhijian-liu commented 6 months ago

@ys-2020, could you please take a look at this issue when you have time? Thanks!