Hi @zhangchenqi123, thank you for your interest in TorchSparse!
The tuner is implemented in this file. It runs the sparse convolution model several times to determine the backend configuration of the sparse convolution kernels, including the dataflow and the kernel parameters.
In example.py, we didn't include the tuner for the sake of simplicity. Please refer to our documentation for its usage.
You can also find example code for the auto-tuner in our artifact benchmark code, at #L180 of artifact-p2/evaluation/evaluation.py.
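To make this concrete, here is a self-contained sketch of a tuning call. The toy network and the coordinate layout are illustrative, and the keyword names (`data_loader`, `n_samples`, `collect_fn`) may differ across versions, so please treat them as approximate and check torchsparse/utils/tune.py and the artifact code for the exact signature:

```python
import torch
import torch.nn as nn
import torchsparse
import torchsparse.nn as spnn
from torchsparse import SparseTensor

# A toy sparse-convolution network standing in for a real model.
model = nn.Sequential(
    spnn.Conv3d(4, 32, kernel_size=3),
    spnn.ReLU(True),
    spnn.Conv3d(32, 64, kernel_size=3),
).cuda().eval()

def random_input(n=1000):
    # Coordinate layout assumed to be (x, y, z, batch); this varies
    # across TorchSparse versions, so adapt it to your install.
    coords = torch.randint(0, 32, (n, 4), dtype=torch.int, device="cuda")
    coords[:, -1] = 0  # single batch
    feats = torch.rand(n, 4, device="cuda")
    return SparseTensor(coords=coords, feats=feats)

# A handful of representative inputs for profiling.
dataset = [random_input() for _ in range(10)]

# Profile the model and cache the chosen backend configuration
# (dataflow + kernel parameters) for each conv layer.
torchsparse.tune(
    model=model,
    data_loader=dataset,
    n_samples=10,                  # number of profiling runs (illustrative)
    collect_fn=lambda data: data,  # maps one batch to the model's input
)

# Subsequent forward passes reuse the tuned configuration.
with torch.no_grad():
    out = model(random_input())
```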
Thanks a lot for your kind reply! I will try the tuner later.
I am currently uncertain about the usage of the "tuner" in the torchsparse codebase. Although I found the import statement "from .utils.tune import tune" in the \torchsparse\__init__.py file, I am unable to locate where the tuner is actually employed in the code, and the mechanism through which it operates remains unclear to me.

Additionally, while exploring the \torchsparse\examples\example.py file, it seems that all instances of spnn.Conv3d ultimately use the "ImplicitGEMM" dataflow by default, and the auto-tuner did not take effect to modify the dataflow of the convolutional layers as described in the TorchSparse++ paper (see the simplified sketch below).

Could you kindly provide clarification or guidance on these concerns? Understanding how the tuner is used and how "ImplicitGEMM" functions within the Conv3d instances would greatly help me comprehend the codebase.
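For reference, the pattern I am looking at in example.py boils down to roughly the following simplified sketch (the layer sizes and the coordinate layout are my guesses for my installed version), with no tune() call anywhere:

```python
import torch
import torch.nn as nn
import torchsparse.nn as spnn
from torchsparse import SparseTensor

# example.py builds a plain nn.Sequential of sparse conv blocks.
model = nn.Sequential(
    spnn.Conv3d(4, 32, kernel_size=3),
    spnn.BatchNorm(32),
    spnn.ReLU(True),
    spnn.Conv3d(32, 64, kernel_size=3, stride=2),
).cuda().eval()

coords = torch.randint(0, 32, (1000, 4), dtype=torch.int, device="cuda")
coords[:, -1] = 0  # assuming the last column is the batch index
feats = torch.rand(1000, 4, device="cuda")
x = SparseTensor(coords=coords, feats=feats)

with torch.no_grad():
    out = model(x)  # every Conv3d here appears to run the default
                    # (ImplicitGEMM) dataflow, with no tuning step
```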
Thank you for your assistance.