mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.
https://torchsparse.mit.edu
MIT License

Uncertainty Regarding the Usage of the Tuner in torchsparse #286

Closed zhangchenqi123 closed 9 months ago

zhangchenqi123 commented 10 months ago

I am currently uncertain about how the "tuner" is used in the torchsparse codebase. Although I noticed the import statement "from .utils.tune import tune" in torchsparse/__init__.py, I am unable to locate where the tuner is actually invoked, and the mechanism through which it operates remains unclear to me.

Additionally, while exploring torchsparse/examples/example.py, it appears that every instance of spnn.Conv3d defaults to the "ImplicitGEMM" dataflow, and the auto-tuner never takes effect to change the dataflow of individual convolutional layers as described in the TorchSparse++ paper. A stripped-down version of my setup is shown below.
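For reference, this is roughly the kind of model I am testing, following the structure of example.py (the exact layer arguments in my script differ; channel sizes here are placeholders):

```python
import torch
import torchsparse.nn as spnn

# A small stack of sparse convolution layers, mirroring examples/example.py.
# As far as I can tell, each spnn.Conv3d below ends up using the ImplicitGEMM
# dataflow by default, and nothing in the forward pass changes that.
model = torch.nn.Sequential(
    spnn.Conv3d(4, 32, kernel_size=3),
    spnn.BatchNorm(32),
    spnn.ReLU(True),
    spnn.Conv3d(32, 64, kernel_size=3, stride=2),
).cuda()
```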

Could you kindly provide clarification or guidance on these points? Understanding how the "tuner" is used and how "ImplicitGEMM" operates within the Conv3d instances would greatly help me understand the codebase.

Thank you for your assistance.

ys-2020 commented 9 months ago

Hi @zhangchenqi123 , thank you for your interest in TorchSparse!

The tuner is implemented in this file. It runs the sparse convolution model several times to decide the backend configuration of the sparse convolution kernels, including the dataflow and the kernel parameters.

In example.py, we did not include the tuner for the sake of simplicity. Please refer to our documentation for its usage.

You can also find example code for the auto-tuner in our artifact benchmark code, at #L180 of artifact-p2/evaluation/evaluation.py.
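A rough sketch of how the tuner is typically invoked before inference is shown below. The keyword arguments (n_samples, collect_fn) and the data-loader construction are assumptions and may differ between versions; the documentation and the artifact script above are authoritative.

```python
import torch
import torchsparse
import torchsparse.nn as spnn

# Hypothetical setup: a small sparse conv network and a data loader that
# yields the same SparseTensor batches used at inference time.
model = torch.nn.Sequential(
    spnn.Conv3d(4, 32, kernel_size=3),
    spnn.ReLU(True),
    spnn.Conv3d(32, 64, kernel_size=3, stride=2),
).cuda().eval()

# dataloader = ...  # yields representative SparseTensor inputs; omitted here

# The tuner profiles several forward passes and records, for each sparse conv
# layer, the fastest backend configuration (dataflow + kernel parameters).
# Argument names below are assumptions; check the linked documentation.
torchsparse.tune(
    model=model,
    data_loader=dataloader,
    n_samples=10,                    # assumed: number of profiled samples
    collect_fn=lambda batch: batch,  # assumed: maps a batch to the model input
)

# Subsequent forward passes reuse the tuned configurations.
# with torch.no_grad():
#     out = model(sample_input)
```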

zhangchenqi123 commented 9 months ago

Thanks a lot for your kind reply! I will try the tuner later.