They are the same, as the computation overhead introduced by the point-based branch is not that significant. The overhead can actually also be observed in this figure (there is a small shift in the x-axis between MinkowskiNet and SPVCNN).
Thanks for the response. Indeed, I can see the small shift too!
Based on Haotian's response here: #19, I got the impression that the voxelization/devoxelization procedure in SPVCNN will have some impact on MACs and GPU latency when compared to MinkowskiNet at the same `cr` (because the sparse convolution branches are exactly the same in both nets). Am I correct in my understanding that the voxelization/devoxelization procedure does not actually penalize the model's inference time much, especially at small `cr`?
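In case it's useful, this is roughly how I would measure that latency myself; just a minimal sketch, with a placeholder model and input rather than your benchmarking code, and it assumes the model can be called directly on a CUDA tensor:

```python
# Minimal GPU-latency sketch (placeholder model/input, not the repo's benchmark code).
import time
import torch

@torch.no_grad()
def measure_latency_ms(model, example_input, warmup=10, runs=50):
    model.eval().cuda()
    example_input = example_input.cuda()
    for _ in range(warmup):          # warm-up to exclude one-time CUDA setup costs
        model(example_input)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        model(example_input)
    torch.cuda.synchronize()         # wait for all kernels before reading the clock
    return (time.perf_counter() - start) / runs * 1e3
```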
This raises another question for me: at the same `cr`, e.g. `cr=1.0`, shouldn't the number of trainable parameters in MinkowskiNet be lower than that of the corresponding SPVCNN? (Because SPVCNN uses point transformation MLPs whereas MinkowskiNet does not.) However, in the pre-trained models you released, both models seem to have an equal number of parameters; am I missing something?
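For reference, this is roughly how I compared the parameter counts; a minimal sketch, where the checkpoint file names are just placeholders and every tensor in the state dict is counted (so buffers are included too):

```python
import torch

def count_params(ckpt_path: str) -> int:
    """Sum the number of elements over all tensors saved in a checkpoint."""
    state = torch.load(ckpt_path, map_location="cpu")
    # Some checkpoints nest the weights under a "model" key; fall back to the dict itself.
    state_dict = state.get("model", state)
    return sum(t.numel() for t in state_dict.values() if torch.is_tensor(t))

# Placeholder file names, not the actual released checkpoints.
print("MinkowskiNet:", count_params("minkunet_cr1.0.pt"))
print("SPVCNN:      ", count_params("spvcnn_cr1.0.pt"))
```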
I see, that clarifies it. Thanks!
Hi, thank you for the insightful work! I had a (potentially dumb) question regarding the comparison of MinkowskiNet and SPVCNN (without NAS): I see that you provide a `cr` parameter to control the channel ratio, i.e. the width of the networks. Am I correct in my understanding that, for this figure, when comparing MinkowskiNet and SPVCNN at the same MACs, the `cr` values for the two models are different?
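My understanding is that `cr` simply multiplies the per-stage channel widths, so matching total MACs between two architectures with different per-layer costs would generally require different `cr` values. A rough illustration of that assumption (the base channel list below is made up, not the repo's):

```python
# Illustrative only: how a channel-ratio multiplier typically scales network widths.
base_channels = [32, 64, 128, 256]           # hypothetical per-stage widths at cr = 1.0

def scale_channels(cr: float) -> list[int]:
    # Both input and output channels of a conv scale with cr, so its MACs
    # shrink roughly with cr**2; two architectures with different per-layer
    # costs therefore need different cr values to land on the same total MACs.
    return [max(1, int(cr * c)) for c in base_channels]

print(scale_channels(1.0))   # [32, 64, 128, 256]
print(scale_channels(0.5))   # [16, 32, 64, 128]
```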