Jieqianyu / SSC-RS

Implementation of IROS23 paper - "SSC-RS: Elevate LiDAR Semantic Scene Completion with Representation Separation and BEV Fusion"
MIT License

Question about validation results #4

Closed shenxiaowrj closed 1 year ago

shenxiaowrj commented 1 year ago

I processed the data and successfully ran validate.py with the weights you provided, but the results look abnormal.

```
SSC_RS/SSC-RS/validate.py --weights weights/weights_epoch_035.pth --dset_root data/SemanticKITTI/dataset/sequences
2023-07-24 13:25:01,920 -- ============ Validation weights: "weights/weights_epoch_035.pth" ============
=> Parsing SemanticKITTI train
parsing seq 00 parsing seq 01 parsing seq 02 parsing seq 03 parsing seq 04 parsing seq 05 parsing seq 06 parsing seq 07 parsing seq 09 parsing seq 10
Using 19130 scans from sequences [0, 1, 2, 3, 4, 5, 6, 7, 9, 10] Is aug: True
=> Parsing SemanticKITTI valid
parsing seq 08
Using 4071 scans from sequences [8] Is aug: False
=> Parsing SemanticKITTI test
parsing seq 11 parsing seq 12 parsing seq 13 parsing seq 14 parsing seq 15 parsing seq 16 parsing seq 17 parsing seq 18 parsing seq 19 parsing seq 20 parsing seq 21
Using 0 scans from sequences [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] Is aug: False
2023-07-24 13:25:04,357 -- => Loading network architecture...
2023-07-24 13:25:04,489 -- => Model Parameters: 23.069487 M
2023-07-24 13:25:04,489 -- => Loading network weights...
ignore weight of mistached shape in key sem_branch.conv1_block.spconv_layers.0.layers_in.0.weight
ignore weight of mistached shape in key sem_branch.conv1_block.spconv_layers.0.layers.0.weight
ignore weight of mistached shape in key sem_branch.conv1_block.spconv_layers.0.layers.3.weight
ignore weight of mistached shape in key sem_branch.conv2_block.spconv_layers.0.layers_in.0.weight
ignore weight of mistached shape in key sem_branch.conv2_block.spconv_layers.0.layers.0.weight
ignore weight of mistached shape in key sem_branch.conv2_block.spconv_layers.0.layers.3.weight
ignore weight of mistached shape in key sem_branch.conv3_block.spconv_layers.0.layers_in.0.weight
ignore weight of mistached shape in key sem_branch.conv3_block.spconv_layers.0.layers.0.weight
ignore weight of mistached shape in key sem_branch.conv3_block.spconv_layers.0.layers.3.weight
2023-07-24 13:25:04,740 -- => Model loaded at weights/weights_epoch_035.pth
2023-07-24 13:25:04,762 -- => Passing the network on the validation set...
  0%| | 0/2036 [00:00<?, ?it/s]
/media/re/2384a6b4-4dae-400d-ad72-9b7044491b55/SSC_LiDAR/SSC_RS/SSC-RS/networks/preprocess.py:126: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  self.lims[0][1], self.sizes[0] // scale, with_res=True)
/media/re/2384a6b4-4dae-400d-ad72-9b7044491b55/SSC_LiDAR/SSC_RS/SSC-RS/networks/preprocess.py:128: UserWarning: (same __floordiv__ deprecation warning)
  self.lims[1][1], self.sizes[1] // scale, with_res=True)
/media/re/2384a6b4-4dae-400d-ad72-9b7044491b55/SSC_LiDAR/SSC_RS/SSC-RS/networks/preprocess.py:130: UserWarning: (same __floordiv__ deprecation warning)
  self.lims[2][1], self.sizes[2] // scale, with_res=True)
/media/re/2384a6b4-4dae-400d-ad72-9b7044491b55/SSC_LiDAR/SSC_RS/SSC-RS/networks/semantic_segmentation.py:136: UserWarning: (same __floordiv__ deprecation warning)
  (input_coords[:, 1:] // ps).int()], dim=1)
2023-07-24 13:37:47,493 -- => [Total Validation Loss = 20.711219787597656]
2023-07-24 13:37:47,493 -- => [Scale 1_1: Loss = 3.362502 - mIoU = 0.083424 - IoU = 0.503165 - P = 0.822360 - R = 0.564524 - F1 = 0.669474]
2023-07-24 13:37:47,493 -- => Training set class-wise IoU:
2023-07-24 13:37:47,493 -- => IoU car: 0.001461
2023-07-24 13:37:47,494 -- => IoU bicycle: 0.000572
2023-07-24 13:37:47,494 -- => IoU motorcycle: 0.000011
2023-07-24 13:37:47,494 -- => IoU truck: 0.000000
2023-07-24 13:37:47,494 -- => IoU other-vehicle: 0.000003
2023-07-24 13:37:47,494 -- => IoU person: 0.000588
2023-07-24 13:37:47,494 -- => IoU bicyclist: 0.000000
2023-07-24 13:37:47,494 -- => IoU motorcyclist: 0.000000
2023-07-24 13:37:47,494 -- => IoU road: 0.281975
2023-07-24 13:37:47,494 -- => IoU parking: 0.000048
2023-07-24 13:37:47,494 -- => IoU sidewalk: 0.141962
2023-07-24 13:37:47,494 -- => IoU other-ground: 0.000886
2023-07-24 13:37:47,494 -- => IoU building: 0.144485
2023-07-24 13:37:47,495 -- => IoU fence: 0.067548
2023-07-24 13:37:47,495 -- => IoU vegetation: 0.279371
2023-07-24 13:37:47,495 -- => IoU trunk: 0.133334
2023-07-24 13:37:47,495 -- => IoU terrain: 0.322580
2023-07-24 13:37:47,495 -- => IoU pole: 0.157996
2023-07-24 13:37:47,495 -- => IoU traffic-sign: 0.052228
2023-07-24 13:37:47,495 -- => ============ Network Validation Done ============
2023-07-24 13:37:47,495 -- Inference time per frame is 0.0717 seconds
```

Is this result normal? If not, how can I get normal results? Thanks!
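An aside on the repeated `__floordiv__` UserWarnings in the log above: they are deprecation noise, not the cause of the bad metrics. Tensor `//` historically rounded toward zero, while Python's `//` floors; the two differ only for negative operands, and the quantities being divided here (grid sizes by a scale factor) are positive. The warning can be silenced with the `torch.div` replacements it suggests. A torch-free sketch of the distinction (function names are illustrative):

```python
def trunc_div(a, b):
    # Old tensor `a // b` behavior: round toward zero, i.e.
    # torch.div(a, b, rounding_mode='trunc'), like C integer division.
    return int(a / b)

def floor_div(a, b):
    # Python `//` semantics: round toward negative infinity, i.e.
    # torch.div(a, b, rounding_mode='floor').
    return a // b

# Identical for non-negative operands (the case in preprocess.py)...
print(trunc_div(7, 2), floor_div(7, 2))    # 3 3
# ...but they diverge for negative ones.
print(trunc_div(-7, 2), floor_div(-7, 2))  # -3 -4
```

So for this code path either `rounding_mode` gives the same result, and the warning can be safely ignored or suppressed.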

shenxiaowrj commented 1 year ago

*(screenshot attachment)*

Jieqianyu commented 1 year ago

Please check the task setting of SSC carefully: the voxel ground truth is only provided every 5 frames. The data organization of the voxels should look like this: *(screenshot)*
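For reference, the SemanticKITTI scene-completion download places voxel ground truth (`.bin`, `.label`, `.invalid`, `.occluded` files) under each `sequences/XX/voxels/` directory, with one file set per 5th scan (`000000`, `000005`, `000010`, ...). A small sketch to sanity-check a sequence directory; the function names are mine, not part of the repo:

```python
import os

def frame_id(fname):
    # "000005.label" -> 5
    return int(os.path.splitext(fname)[0])

def voxel_frames(seq_dir):
    # Frame ids that have voxel ground truth, e.g. for sequences/00.
    # A correctly organized SSC dataset yields [0, 5, 10, 15, ...].
    voxel_dir = os.path.join(seq_dir, "voxels")
    return sorted(frame_id(f) for f in os.listdir(voxel_dir)
                  if f.endswith(".label"))

def has_expected_stride(frames, stride=5):
    # Consecutive voxel frames should be exactly `stride` scans apart.
    return all(b - a == stride for a, b in zip(frames, frames[1:]))
```

If `has_expected_stride(voxel_frames(...))` is False, the voxel files were likely generated per scan instead of using the official per-5-frame ground truth.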

Jieqianyu commented 1 year ago

We paste our validation log here:

```
2023-07-03 19:20:56,753 -- ============ Validation weights: "weights/weights_epoch_035.pth" ============
2023-07-03 19:20:57,330 -- => Loading network architecture...
2023-07-03 19:20:57,669 -- => Model Parameters: 23.069487 M
2023-07-03 19:20:57,670 -- => Loading network weights...
2023-07-03 19:20:57,917 -- => Model loaded at weights/weights_epoch_035.pth
2023-07-03 19:20:57,981 -- => Passing the network on the validation set...
2023-07-03 19:21:15,271 -- => Iteration [20/408], Train Losses: total = 7.769538, semantic_1_1 = 2.714619, semantic_seg = 3.391134, scene_completion = 1.663784
2023-07-03 19:21:29,465 -- => Iteration [40/408], Train Losses: total = 7.397734, semantic_1_1 = 2.255777, semantic_seg = 3.367792, scene_completion = 1.774165
2023-07-03 19:21:43,023 -- => Iteration [60/408], Train Losses: total = 6.186661, semantic_1_1 = 2.106543, semantic_seg = 2.635618, scene_completion = 1.444500
2023-07-03 19:21:56,042 -- => Iteration [80/408], Train Losses: total = 6.596225, semantic_1_1 = 2.274071, semantic_seg = 2.855760, scene_completion = 1.466394
2023-07-03 19:22:08,997 -- => Iteration [100/408], Train Losses: total = 8.324954, semantic_1_1 = 2.572615, semantic_seg = 3.648911, scene_completion = 2.103428
2023-07-03 19:22:22,224 -- => Iteration [120/408], Train Losses: total = 7.166960, semantic_1_1 = 2.730058, semantic_seg = 2.921300, scene_completion = 1.515602
2023-07-03 19:22:35,628 -- => Iteration [140/408], Train Losses: total = 6.401707, semantic_1_1 = 2.385809, semantic_seg = 2.518653, scene_completion = 1.497244
2023-07-03 19:22:49,329 -- => Iteration [160/408], Train Losses: total = 6.057355, semantic_1_1 = 2.361865, semantic_seg = 2.283807, scene_completion = 1.411683
2023-07-03 19:23:03,801 -- => Iteration [180/408], Train Losses: total = 7.496494, semantic_1_1 = 2.764786, semantic_seg = 3.368202, scene_completion = 1.363505
2023-07-03 19:23:17,301 -- => Iteration [200/408], Train Losses: total = 7.247997, semantic_1_1 = 2.461466, semantic_seg = 3.067770, scene_completion = 1.718760
2023-07-03 19:23:29,261 -- => Iteration [220/408], Train Losses: total = 7.005735, semantic_1_1 = 2.414611, semantic_seg = 2.918452, scene_completion = 1.672672
2023-07-03 19:23:41,878 -- => Iteration [240/408], Train Losses: total = 8.524563, semantic_1_1 = 2.659985, semantic_seg = 3.559291, scene_completion = 2.305286
2023-07-03 19:23:54,646 -- => Iteration [260/408], Train Losses: total = 7.630559, semantic_1_1 = 2.625789, semantic_seg = 3.145986, scene_completion = 1.858784
2023-07-03 19:24:09,235 -- => Iteration [280/408], Train Losses: total = 7.948913, semantic_1_1 = 2.546989, semantic_seg = 3.437655, scene_completion = 1.964267
2023-07-03 19:24:24,197 -- => Iteration [300/408], Train Losses: total = 7.606469, semantic_1_1 = 2.668778, semantic_seg = 3.010377, scene_completion = 1.927313
2023-07-03 19:24:37,897 -- => Iteration [320/408], Train Losses: total = 6.322242, semantic_1_1 = 2.339064, semantic_seg = 2.128008, scene_completion = 1.855170
2023-07-03 19:24:50,936 -- => Iteration [340/408], Train Losses: total = 7.515604, semantic_1_1 = 2.725264, semantic_seg = 2.152425, scene_completion = 2.637915
2023-07-03 19:25:04,475 -- => Iteration [360/408], Train Losses: total = 7.254798, semantic_1_1 = 2.617983, semantic_seg = 2.567984, scene_completion = 2.068831
2023-07-03 19:25:18,298 -- => Iteration [380/408], Train Losses: total = 5.758758, semantic_1_1 = 1.943281, semantic_seg = 2.403465, scene_completion = 1.412012
2023-07-03 19:25:32,265 -- => Iteration [400/408], Train Losses: total = 6.738690, semantic_1_1 = 2.245677, semantic_seg = 3.269815, scene_completion = 1.223199
2023-07-03 19:25:38,519 -- => [Total Validation Loss = 6.994754791259766]
2023-07-03 19:25:38,532 -- => [Scale 1_1: Loss = 2.427781 - mIoU = 0.247551 - IoU = 0.586171 - P = 0.784827 - R = 0.698411 - F1 = 0.739102]
2023-07-03 19:25:38,532 -- => Training set class-wise IoU:
2023-07-03 19:25:38,532 -- => IoU car: 0.467544
2023-07-03 19:25:38,533 -- => IoU bicycle: 0.050743
2023-07-03 19:25:38,533 -- => IoU motorcycle: 0.069411
2023-07-03 19:25:38,533 -- => IoU truck: 0.415105
2023-07-03 19:25:38,533 -- => IoU other-vehicle: 0.198014
2023-07-03 19:25:38,533 -- => IoU person: 0.061751
2023-07-03 19:25:38,533 -- => IoU bicyclist: 0.015197
2023-07-03 19:25:38,534 -- => IoU motorcyclist: 0.000000
2023-07-03 19:25:38,534 -- => IoU road: 0.737917
2023-07-03 19:25:38,534 -- => IoU parking: 0.265869
2023-07-03 19:25:38,534 -- => IoU sidewalk: 0.453476
2023-07-03 19:25:38,534 -- => IoU other-ground: 0.021464
2023-07-03 19:25:38,534 -- => IoU building: 0.410471
2023-07-03 19:25:38,534 -- => IoU fence: 0.157856
2023-07-03 19:25:38,535 -- => IoU vegetation: 0.425750
2023-07-03 19:25:38,535 -- => IoU trunk: 0.222504
2023-07-03 19:25:38,535 -- => IoU terrain: 0.505636
2023-07-03 19:25:38,535 -- => IoU pole: 0.179233
2023-07-03 19:25:38,535 -- => IoU traffic-sign: 0.045535
```

Jieqianyu commented 1 year ago

```
ignore weight of mistached shape in key sem_branch.conv1_block.spconv_layers.0.layers_in.0.weight
ignore weight of mistached shape in key sem_branch.conv1_block.spconv_layers.0.layers.0.weight
ignore weight of mistached shape in key sem_branch.conv1_block.spconv_layers.0.layers.3.weight
ignore weight of mistached shape in key sem_branch.conv2_block.spconv_layers.0.layers_in.0.weight
ignore weight of mistached shape in key sem_branch.conv2_block.spconv_layers.0.layers.0.weight
ignore weight of mistached shape in key sem_branch.conv2_block.spconv_layers.0.layers.3.weight
ignore weight of mistached shape in key sem_branch.conv3_block.spconv_layers.0.layers_in.0.weight
ignore weight of mistached shape in key sem_branch.conv3_block.spconv_layers.0.layers.0.weight
ignore weight of mistached shape in key sem_branch.conv3_block.spconv_layers.0.layers.3.weight
```

These messages show that the weight loading went wrong. Please check your running environment, especially the spconv version (v1.1 is required).
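In other words, the loader found checkpoint tensors whose shapes disagree with the freshly built model (plausibly because spconv 1.x and 2.x store sparse-conv kernels in different layouts), skipped them, and left those layers randomly initialized, which is exactly what collapses the metrics. A generic, torch-free sketch of such a diagnosis (function and variable names are mine, not the repo's API); the shape dicts would come from `model.state_dict()` and `torch.load(path)`:

```python
def diagnose_checkpoint(ckpt_shapes, model_shapes):
    """Compare parameter shapes between a checkpoint and a model.

    Both arguments map parameter names to shape tuples, e.g. built via
    {k: tuple(v.shape) for k, v in torch.load(path, map_location="cpu").items()}.
    A tolerant (strict=False, shape-filtered) load silently drops every
    mismatched key, leaving that layer randomly initialized.
    """
    mismatched = sorted(k for k, s in ckpt_shapes.items()
                        if k in model_shapes and model_shapes[k] != s)
    missing = sorted(k for k in model_shapes if k not in ckpt_shapes)
    return mismatched, missing

# Example: one matching head, one sparse-conv kernel with a permuted layout.
ckpt = {"head.weight": (19, 64), "sem_branch.conv1.weight": (3, 3, 3, 16, 32)}
model = {"head.weight": (19, 64), "sem_branch.conv1.weight": (32, 3, 3, 3, 16)}
print(diagnose_checkpoint(ckpt, model))  # (['sem_branch.conv1.weight'], [])
```

A non-empty `mismatched` list before validation is a cheap early warning that the environment does not match the one the weights were trained in.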

shenxiaowrj commented 1 year ago

> Please check the task setting of SSC carefully: the voxel ground truth is only provided every 5 frames. The data organization of the voxels should look like this: *(screenshot)*

Thank you for your detailed reply. I directly ran https://github.com/Jieqianyu/SSC-RS/blob/main/datasets/label_downsample.py to process the data, and I got a result like this: *(screenshot)*

How can I get the voxels every 5 frames?
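As background on what a label-downsampling step typically does in SSC pipelines: the lower-scale ground truth (1:2, 1:4, ...) is usually derived by majority voting over fixed-size blocks of the full-resolution voxel labels, while the per-5-frame `voxels/` files themselves ship with the SemanticKITTI SSC download rather than being generated by this step. A pure-Python sketch of majority-vote downsampling (all names are mine; the repo's label_downsample.py may differ in details):

```python
from collections import Counter

def downsample_labels(grid, dims, factor=2, ignore=255):
    """Majority-vote downsampling of a dense voxel label grid.

    `grid` is a flat list of per-voxel labels in x-major order with shape
    `dims` = (X, Y, Z); each factor^3 block collapses to its most frequent
    non-ignore label, or `ignore` if the whole block is ignored.
    """
    X, Y, Z = dims
    out = []
    for x in range(0, X, factor):
        for y in range(0, Y, factor):
            for z in range(0, Z, factor):
                block = [grid[(x + i) * Y * Z + (y + j) * Z + (z + k)]
                         for i in range(factor)
                         for j in range(factor)
                         for k in range(factor)]
                votes = Counter(l for l in block if l != ignore)
                out.append(votes.most_common(1)[0][0] if votes else ignore)
    return out

# A single 2x2x2 block with five 1-labels and three 0-labels -> label 1.
print(downsample_labels([1, 1, 1, 1, 0, 0, 0, 1], (2, 2, 2)))  # [1]
```

So rerunning a downsampling script cannot create the missing per-5-frame files; those come from the dataset's voxel archive.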

shenxiaowrj commented 1 year ago

> ignore weight of mistached shape in key sem_branch.conv1_block.spconv_layers.0.layers_in.0.weight (and eight similar lines)
>
> This shows that there are some bugs in the weight loading. Please check the running environment, especially the version of spconv (v1.1 is required).

After changing the spconv version and the data, I got the correct results. Thank you so much!!!

willemeng commented 5 months ago

> ignore weight of mistached shape in key sem_branch.conv1_block.spconv_layers.0.layers_in.0.weight (and eight similar lines)
>
> This shows that there are some bugs in the weight loading. Please check the running environment, especially the version of spconv (v1.1 is required).

I would like to ask: if I train and reproduce with spconv 2.x, will there be a big gap in the results?

Jieqianyu commented 5 months ago

We trained the model from scratch. There won't be much of a performance difference using spconv 2.x.