Closed: harryseely closed this issue 1 year ago
You should check the config file and change all parameters related to depth
to 6.
I did so and I am still getting the same error. Are you referring only to the parameters named "depth" and "full_depth", or are there other parameters that could affect this? As far as I can tell, the only places in the code where depth/full_depth need to be specified are 1) when building the octree using the Transform class; and 2) when data enters the network:
forward(self, data: torch.Tensor, octree: Octree, depth: int).
Are there other places in the code where depth must be specified to avoid this error? Have you tested LeNet using different depths?
Please use this config file. cls_m40_d6.zip
I see what the issue is. The parameter I had not updated was stages in the LeNet model. Based on my testing, if you set stages=4 and depth=6, the model works. However, if I set stages=3 and depth=6 (what I was doing before), I get the shape mismatch error. The following configurations work for me when using the LeNet architecture:
stages=5, depth=7
stages=6, depth=8
stages=7, depth=9
It appears that the stages and depth parameters are dependent on each other. Are these parameters supposed to be related? If so, what is the required relationship? The reason I ask is that I would like to tune depth/stages as hyperparameters, and I want to avoid errors during tuning.
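For what it's worth, the configurations listed above all follow one pattern. This is only an inference from the reported working/failing pairs, not something read out of the O-CNN source:

```python
# Hedged sketch: inferring the stages/depth relationship from the
# configurations reported above (an observation, not the library's spec).
working = [(4, 6), (5, 7), (6, 8), (7, 9)]  # (stages, depth) pairs that work
failing = [(3, 6)]                          # pair that raised the shape error

# every working pair satisfies stages == depth - 2
assert all(stages == depth - 2 for stages, depth in working)

# the failing pair violates it
assert all(stages != depth - 2 for stages, depth in failing)
print("all working configs satisfy stages == depth - 2")
```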
Yes, you are right: these two parameters are related.
The reason is that the FC layer in LeNet only deals with a fixed number of voxels (64), which corresponds to the 2nd layer of an octree. So if the octree is deeper, stages should increase accordingly to downsample the octree, so that the octree depth is 2 by the time the features reach the FC layer.
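The downsampling arithmetic above can be sketched in plain Python, assuming each stage halves the grid resolution (the standard octree downsampling; the function name here is illustrative, not from the library):

```python
# Sketch of the voxel-count arithmetic described above: after `stages`
# downsamplings, features live at octree depth (depth - stages), and a
# full grid at depth d has (2**d)**3 voxels.
def voxels_at_fc(depth: int, stages: int) -> int:
    final_depth = depth - stages
    return (2 ** final_depth) ** 3

# The FC layer expects 64 voxels, i.e. a depth-2 grid: (2**2)**3 == 64.
assert voxels_at_fc(6, 4) == 64    # stages=4, depth=6 -> works
assert voxels_at_fc(7, 5) == 64    # stages=5, depth=7 -> works
assert voxels_at_fc(6, 3) == 512   # stages=3, depth=6 -> depth-3 grid, mismatch
```

This is why stages must track depth: the constraint is that depth minus stages equals 2.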
I am trying to increase the octree depth for LeNet from 5 (depth used in example), but I keep getting an input shape error as soon as the data hits the first linear layer in the network. With the same data and code, increasing the depth from 5 to 6 works with HRNet.
line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (16x32768 and 4096x128)
I am using a dataset with 4096 points and a batch size of 16. Any idea what might be causing this?
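For reference, the shapes in this traceback are consistent with the features arriving at the FC layer one octree level too shallow. This check assumes 64 feature channels before the FC layer (an assumption from the shape arithmetic, not read from the code):

```python
# Hedged check of the traceback shapes, assuming 64 channels before the FC
# layer (hypothetical; inferred from 4096 = 64 voxels * 64 channels).
batch, channels = 16, 64

fc_in = 4 ** 3 * channels   # FC weight expects a depth-2 grid: 64 voxels * 64 ch
got   = 8 ** 3 * channels   # features arrive on a depth-3 grid: 512 voxels * 64 ch

assert fc_in == 4096        # mat2 is (4096, 128)
assert got == 32768         # mat1 is (16, 32768)
```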