One odd thing is that SparseResUNet42 seems to raise no errors while SparseResNet21D does...
Hi @96lives,

Thanks for pointing out the issue. You may set the `kmap_mode` to `hashmap` using the following code snippet:

```python
import torchsparse.nn.functional as F

F.set_kmap_mode("hashmap")
```
After this modification the error should go away. Our default hashmap construction method (`hashmap_on_the_fly`) is designed for large-scale inputs. The `hashmap` mode is slightly slower but is expected to also work well for small inputs. In fact, if the problem only has 1000 input points, sparse convolution does not provide a big advantage over point-based primitives.

Besides, SparseResNet21D is a detection backbone that uses `kernel_size=3, stride=2` downsampling layers. These layers dilate the activated regions of the input (we follow the definition of SpConv). SparseResUNet42, in contrast, is a segmentation backbone that only uses `kernel_size=2, stride=2` downsampling layers, so the activated regions do not dilate after each downsampling layer. This is likely why you see "insufficient hashtable capacity" when running SparseResNet21D: in `hashmap_on_the_fly`, we currently construct the hash tables based on the initial input size, which the dilated activations can exceed.
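To make the dilation point concrete, here is a minimal 1-D sketch. The helper below is illustrative only (it is not part of the torchsparse API), and the offset convention for even kernels is an assumption chosen to match `floor(x / stride)` for `kernel_size=2, stride=2`:

```python
# Illustrative 1-D model of strided sparse convolution in the SpConv
# convention: output site y is active iff some kernel offset o reaches an
# active input site, i.e. stride * y + o == x for some active x.
def downsampled_active_sites(active, kernel_size, stride):
    # Offsets are centered for odd kernels and left-aligned for even ones
    # (an assumption; for kernel_size=2, stride=2 this gives y = floor(x / 2)).
    if kernel_size % 2 == 1:
        offsets = range(-(kernel_size // 2), kernel_size // 2 + 1)
    else:
        offsets = range(kernel_size)
    out = set()
    for x in active:
        for o in offsets:
            if (x - o) % stride == 0:
                out.add((x - o) // stride)
    return out

active = {7, 11, 41}

# kernel_size=2, stride=2 (SparseResUNet42-style): each input maps to exactly
# one output, so the number of active sites can never grow.
print(sorted(downsampled_active_sites(active, 2, 2)))  # [3, 5, 20]

# kernel_size=3, stride=2 (SparseResNet21D-style): one input can activate up
# to two outputs, so the active region dilates.
print(sorted(downsampled_active_sites(active, 3, 2)))  # [3, 4, 5, 6, 20, 21]
```

Since `hashmap_on_the_fly` sizes its hash tables from the initial number of active sites, the growth in the second case can exceed the table capacity.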
Best,
Haotian
Thanks! It does work
Is there an existing issue for this?
Current Behavior
Hi, I'm trying to get familiar with torchsparse 2.1.0, so I've run `examples/backbone.py`. I've changed the code so that the batch dimension comes first, as mentioned in the docs. But when I run the code, I get an error saying the hashtable capacity is insufficient. The error does not occur if I reduce the number of points from 1000 to 100 (by changing `input_size` to 100). Could you help me with this? I figure this must be a bug, because inputs of 1000 points are heavily used in the literature. Thanks in advance :) A rough sketch of what I'm running is below.
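For reference, this is roughly what my modified script does. It is a sketch, not the exact shipped example; the `SparseTensor` and `SparseResNet21D` argument names follow my reading of the 2.1.0 docs and may differ from `examples/backbone.py`:

```python
# Rough sketch adapted from examples/backbone.py (argument names assumed).
import torch
from torchsparse import SparseTensor
from torchsparse.backbones import SparseResNet21D

input_size = 1000  # the error disappears when this is 100

# Batch index in the FIRST coordinate column (batch-first), per the 2.1.0 docs.
# Random coordinates may repeat here; the real example deduplicates via voxelization.
coords = torch.zeros(input_size, 4, dtype=torch.int32, device="cuda")
coords[:, 1:] = torch.randint(0, 100, (input_size, 3), dtype=torch.int32, device="cuda")
feats = torch.randn(input_size, 4, device="cuda")

x = SparseTensor(coords=coords, feats=feats)
model = SparseResNet21D(in_channels=4).cuda()

outputs = model(x)  # -> "insufficient hashtable capacity" with 1000 points
```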
Expected Behavior
No response
Environment
Anything else?
No response