LittlePey / SFD

Sparse Fuse Dense: Towards High Quality 3D Detection with Depth Completion (CVPR 2022, Oral)
Apache License 2.0
263 stars · 35 forks

ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 12, 1]) #69

Open HuangLLL123 opened 8 months ago

HuangLLL123 commented 8 months ago

The error is always raised, but on a different sample each time.

Has anyone encountered the same problem before?

[Screenshots 1 and 2: error tracebacks]
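
For context: this ValueError is raised by PyTorch's BatchNorm layers, which cannot compute batch statistics in training mode when each channel sees only a single value, which is exactly the torch.Size([1, 12, 1]) input in the traceback. A minimal reproduction in plain PyTorch, independent of SFD:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(12)    # 12 channels, as in the error message
x = torch.randn(1, 12, 1)  # (batch, channels, length): one value per channel

bn.train()
try:
    bn(x)
except ValueError as e:
    print(e)  # "Expected more than 1 value per channel when training, ..."

bn.eval()                  # in eval mode the running statistics are used
print(bn(x).shape)         # torch.Size([1, 12, 1]) -- no error
```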

vacant-ztz commented 5 months ago

Hello, have you solved the problem? And if so, how?

HuangLLL123 commented 4 months ago

> Hello, have you solved the problem? And if so, how?

No. How about you now?

vacant-ztz commented 4 months ago

I have solved this problem. After checking, I found that it was because I am using a self-built dataset whose image size differs from KITTI's (mine is 1920×1080, KITTI's is 1270×400), so I had not modified the input and output sizes when generating the depth map, and the generated pseudo point cloud did not correspond to my images. The solution is to modify the depth-map generation code to output 1920×1080, and to modify the w, h parameters in sfd_head.py at around line 505 (set them slightly larger than the input image size).
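
For reference, a minimal sketch of the kind of change described above, assuming a 1920×1080 dataset; the resize helper and the exact variable names here are illustrative, not the actual SFD code:

```python
import torch
import torch.nn.functional as F

# Target resolution of the self-built dataset instead of KITTI's 1270x400.
IMG_W, IMG_H = 1920, 1080

def resize_depth(pred_depth: torch.Tensor) -> torch.Tensor:
    """Resize a (B, 1, h, w) depth prediction to the full image resolution,
    so every image pixel of the pseudo point cloud gets a depth value."""
    return F.interpolate(pred_depth, size=(IMG_H, IMG_W),
                         mode="bilinear", align_corners=False)

# In sfd_head.py (around line 505), the fixed w, h used to index image
# pixels should then be slightly larger than the input image:
w, h = IMG_W + 4, IMG_H + 4
```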

HuangLLL123 commented 4 months ago

> I have solved this problem. After checking, I found that it was because I am using a self-built dataset whose image size differs from KITTI's (mine is 1920×1080, KITTI's is 1270×400), so I had not modified the input and output sizes when generating the depth map, and the generated pseudo point cloud did not correspond to my images. The solution is to modify the depth-map generation code to output 1920×1080, and to modify the w, h parameters in sfd_head.py at around line 505 (set them slightly larger than the input image size).

I am using a self-built dataset too. I changed the values of w and h (1280×960) according to your suggestion, but the problem still exists. I also found that increasing the batch size alleviates the problem, but it still occurs. Could you please tell me how you modified the code to generate the depth map at 1920×1080? Or do you have any other suggestions? (A defensive workaround is sketched below.) @vacant-ztz
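
If the error still appears with larger batch sizes, one minimal workaround, assuming the failure still comes from a BatchNorm layer that receives a single value per channel (an assumption, not the authors' fix), is to fall back to running statistics for singleton inputs:

```python
import torch

def safe_bn(bn: torch.nn.BatchNorm1d, x: torch.Tensor) -> torch.Tensor:
    """Apply BatchNorm1d, but avoid the 'more than 1 value per channel'
    error when x is (1, C, 1): normalize with running statistics instead."""
    if bn.training and x.shape[0] * x.shape[-1] <= 1:
        bn.eval()
        out = bn(x)   # uses running mean/var, no batch statistics needed
        bn.train()
        return out
    return bn(x)
```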

vacant-ztz commented 4 months ago

If you want to modify the output size of the depth map, first modify the values of oheight, owidth, and cwidth in SFD-TWISE-main/dataloaders/kitti_loader.py and make sure they are divisible by 16. After that, modify the size of the pred_dep tensor in evaluate.py to match the size of your output image. @HuangLLL123
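
A short sketch of these two edits, assuming a 1920×1080 dataset; the variable names follow the comment above, but the surrounding SFD-TWISE code may differ:

```python
import torch

# dataloaders/kitti_loader.py: the output/crop size must be divisible
# by 16 so the depth-completion encoder/decoder feature maps line up.
oheight, owidth = 1072, 1920   # 1080 rounded down to a multiple of 16
cwidth = 1920
assert oheight % 16 == 0 and owidth % 16 == 0 and cwidth % 16 == 0

# evaluate.py: the predicted-depth buffer must match the new output
# size instead of KITTI's, e.g.:
pred_dep = torch.zeros(1, 1, oheight, owidth)
```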

HuangLLL123 commented 4 months ago

Thank you very much. I have also solved my problem with your method, but I have encountered a new problem when using my self-built dataset. The error is as follows:

File "/home/tianran/workdir/SFD/pcdet/models/roi_heads/target_assigner/proposal_target_layer.py", line 162, in subsample_rois
raise NotImplementedError
NotImplementedError
maxoverlaps:(min=nan, max=nan) ERROR: FG=0, BG=0

I have tried many of the methods mentioned in other issues, such as normalizing point cloud features and reducing the learning rate, but the problem has not been completely solved. Have you encountered this problem while using a self-built dataset? Could you please tell me your solution? @vacant-ztz
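
A debugging sketch for this NaN issue (an assumption, not a confirmed fix): maxoverlaps usually turns NaN when the RoIs or ground-truth boxes already contain NaN/Inf values or degenerate sizes, so checking them right before the overlap computation narrows down the cause. The check_boxes helper below is hypothetical:

```python
import torch

def check_boxes(name: str, boxes: torch.Tensor) -> None:
    """boxes: (N, 7) tensor of [x, y, z, dx, dy, dz, heading]."""
    if not torch.isfinite(boxes).all():
        raise ValueError(f"{name} contains NaN/Inf values")
    # zero or negative box dimensions make the 3D IoU return NaN
    if (boxes[:, 3:6] <= 0).any():
        raise ValueError(f"{name} has non-positive dimensions")

# e.g. inside subsample_rois, before the overlaps are computed:
# check_boxes("rois", rois)
# check_boxes("gt_boxes", cur_gt[:, :7])
```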

Zixiu99 commented 4 weeks ago

@HuangLLL123 Hi, I'm experiencing the same problem (maxoverlaps:(min=nan, max=nan) ERROR: FG=0, BG=0) on a self-built dataset. Have you solved it?