mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.
https://torchsparse.mit.edu
MIT License

How to compute the FLOPs of sparse operations #277

Closed nancyhxn closed 10 months ago

nancyhxn commented 11 months ago

When using sparse convolution in my own algorithm, how should I compute the FLOPs it consumes? What I have found so far is that directly calling the `profile` function via `from thop import profile` does not count the sparse operations at all; they are simply ignored. How can I solve this?

zhijian-liu commented 11 months ago

Yes, you need to write a custom hook for sparse convolution.
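
A minimal sketch of such a hook, assuming thop's `custom_ops` mechanism and that the sparse conv module (`torchsparse.nn.Conv3d` here) exposes `in_channels`, `out_channels`, and `kernel_size`, and that the input `SparseTensor` exposes `.feats`; these attribute names and the per-point cost are assumptions on my side, and the count is an upper bound (every active point paired with every kernel offset), not the exact sparse MAC count:

```python
import torch
import torchsparse.nn as spnn
from thop import profile


def count_sparse_conv(module, inputs, output):
    # thop custom-op hook: accumulate an estimated MAC count into module.total_ops.
    x = inputs[0]                          # SparseTensor input (assumed to expose .feats)
    n_points = x.feats.shape[0]            # number of active (non-zero) input points
    ks = module.kernel_size
    kernel_volume = ks ** 3 if isinstance(ks, int) else int(torch.prod(torch.tensor(ks)))
    # Upper-bound estimate: every active point contributes to every kernel offset.
    macs = n_points * kernel_volume * module.in_channels * module.out_channels
    module.total_ops += torch.DoubleTensor([macs])


def profile_sparse_model(model, sparse_input):
    # Route all torchsparse Conv3d layers through the custom hook above.
    return profile(model, inputs=(sparse_input,),
                   custom_ops={spnn.Conv3d: count_sparse_conv})
```

For the exact (theoretically minimal) count, the per-layer kernel map discussed below is the better starting point, since it records which (output point, kernel offset) pairs actually exist.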

ys-2020 commented 11 months ago

Hi @nancyhxn, I think the kmap (kernel map) for sparse convolution might be helpful. You can use the out_in_map to calculate the theoretical minimum FLOPs for sparse convolution. To compute the actual MACs in TorchSparse v2.1.0, you may also need the reduced_mask. Please refer to our paper for more details.
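
A hedged sketch of how the theoretical minimum could be derived from the kernel map, assuming `out_in_map` is an `(n_out, kernel_volume)` integer tensor whose valid entries index input points and whose empty slots are `-1`; the exact layout may differ across TorchSparse versions, so please check the v2.1.0 source before relying on it:

```python
import torch


def sparse_conv_macs(out_in_map: torch.Tensor,
                     in_channels: int,
                     out_channels: int) -> int:
    # Each valid (output point, kernel offset) pair contributes one
    # in_channels x out_channels matrix-vector product.
    n_valid_pairs = int((out_in_map != -1).sum())
    return n_valid_pairs * in_channels * out_channels
```

The actual executed MACs can be higher because of padding in the gathered GEMM tiles, which is what the reduced_mask accounts for; the paper describes that computation in detail.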

ys-2020 commented 10 months ago

Closing this issue due to inactivity. Please feel free to reopen it if you have any further questions.