Closed xinghuokang closed 3 years ago
Thanks for your interest in our work. We produce the visualization results by manually setting the bounding box colors in MeshLab according to the color bar.
Thanks for your reply. Normally, an object detector outputs each bounding box together with its predicted class, so why not color the boxes according to their classification results?
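Coloring boxes by class rather than by hand could be sketched like this (a minimal illustration; the class names and color table are assumptions, not BRNet's actual code):

```python
# Hypothetical sketch: map each predicted class label to a fixed RGB color,
# instead of setting box colors manually in MeshLab.
CLASS_COLORS = {
    'table': (0, 255, 0),   # green
    'chair': (255, 0, 0),   # red
    'bed':   (0, 0, 255),   # blue
}

def color_for(label, default=(128, 128, 128)):
    """Return an RGB color for a predicted class label, grey if unknown."""
    return CLASS_COLORS.get(label, default)
```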
Hi, I cannot find the 3D NMS process in demo/pcd_demo.py or in inference.py. Where is it performed?
Hi, the NMS process is done here: https://github.com/cheng052/BRNet/blob/4f9cf72f49757b8fb74e7eae50cd2c4ea3cb7f83/mmdet3d/models/roi_heads/bbox_heads/br_bbox_head.py#L112, which is called by inference_detector() in pcd_demo.py. The NMS runs inside the model's forward pass in test mode.
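For intuition, the suppression step can be sketched as greedy NMS over axis-aligned boxes (a simplified stand-in; the real code in br_bbox_head.py operates on 3D/BEV boxes inside the test-mode forward pass):

```python
import numpy as np

def nms_axis_aligned(boxes, scores, iou_thr=0.25):
    """Greedy NMS over axis-aligned boxes given as [x1, y1, x2, y2].

    Keeps the highest-scoring box, removes overlapping boxes above
    iou_thr, and repeats on the remainder.
    """
    order = np.argsort(scores)[::-1]          # indices, best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = boxes[order[1:]]
        # intersection of the current best box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], rest[:, 0])
        y1 = np.maximum(boxes[i, 1], rest[:, 1])
        x2 = np.minimum(boxes[i, 2], rest[:, 2])
        y2 = np.minimum(boxes[i, 3], rest[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thr]     # drop heavily overlapping boxes
    return keep
```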
First, thanks for your reply. I have some questions. I ran the demo with "CUDA_VISIBLE_DEVICES=0 python demo/pcd_demo.py demo/sunrgbd_000017.bin demo/brnet_8x1_sunrgbd-3d-10class.py checkpoints/brnet_8x1_sunrgbd-3d-10class_trained.pth" and also printed the config information below:
Every time I run pcd_demo.py it gives different results; what's wrong with it?
In the printed config, use_nms=False (the small red box in the picture). Why is that? I did not modify any code.
That setting belongs to the rpn (region proposal network) part; the NMS process is done after the RCNN network.
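The split between the two stages might look roughly like this in the test config (key names follow common mmdetection3d conventions and are assumptions, not copied from BRNet's actual file):

```python
# Illustrative test-time config layout (keys are assumed, not verbatim):
test_cfg = dict(
    rpn=dict(use_nms=False),        # proposals pass through un-suppressed
    rcnn=dict(
        use_nms=True,               # the final NMS happens after the RCNN stage
        nms_thr=0.25,
        score_thr=0.05,
        per_class_proposal=True,
    ),
)
```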
Can you explain the result "3d_box" (the big red box in the picture)? Why does it print so many lines?
This is related to the setting per_class_proposal=True in the rcnn config; you can check the appendix of the VoteNet paper for details.
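The per-class proposal trick from the VoteNet appendix can be sketched as follows (shapes are illustrative): each proposal is duplicated once per class and scored by its probability for that class, which is why the output prints far more rows than there are proposals.

```python
import numpy as np

# Each of the 4 proposals is replicated for each of the 3 classes,
# yielding 4 x 3 = 12 output boxes.
num_proposals, num_classes = 4, 3
boxes = np.random.rand(num_proposals, 7)              # (x, y, z, l, w, h, yaw)
cls_prob = np.random.rand(num_proposals, num_classes)  # per-class probabilities

expanded_boxes = np.repeat(boxes, num_classes, axis=0)       # one copy per class
expanded_labels = np.tile(np.arange(num_classes), num_proposals)
expanded_scores = cls_prob.reshape(-1)                       # score per (box, class)

print(expanded_boxes.shape)   # (12, 7)
```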
Every time I run pcd_demo.py it gives different results; what's wrong with it?
The detector suffers from some randomness. For example, the number of input points for the network on the SUN RGB-D dataset is set to 20000, but the raw point cloud contains more than 20000 points. Thus a random sample must be taken before feeding the points into the network, and this random sampling can change the detection results from run to run. It is a common issue in related work such as VoteNet and H3DNet.
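The subsampling step described above can be sketched like this (a minimal illustration, not the repo's actual loading code); fixing the random seed makes the sampled input, and hence the detections, reproducible:

```python
import numpy as np

def sample_points(points, num=20000, seed=None):
    """Randomly draw `num` points from a larger raw point cloud.

    Without a fixed seed, each run feeds a different subset to the
    network, which is the source of the run-to-run randomness.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=num, replace=False)
    return points[idx]

raw_points = np.random.rand(35000, 3)   # illustrative raw cloud with >20000 points
a = sample_points(raw_points, seed=0)
b = sample_points(raw_points, seed=0)
assert np.array_equal(a, b)             # same seed -> identical network input
```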
Hi Cheng, thank you very much! I need to read the paper in detail and then study the code. In your opinion, what is the current research status of 3D object detection on point clouds and 3D semantic segmentation? @cheng052
As for the 3D semantic segmentation task, most of the work is done on indoor datasets (ScanNet and so on), and 3D semantic segmentation for outdoor environments is still worth working on.
Also, you can pay attention to many related topics around 3D detection, such as efficiency optimization, domain adaptation, semi-supervised learning ... There are many other interesting topics besides getting better performance on a specific dataset.
Thank you for sharing; I've learned a lot.
Hi, there are so many detected bboxes. Is there any way to get rid of them?
You can try using a larger confidence threshold (such as 0.5) and a larger NMS threshold.
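Filtering by a higher confidence threshold before visualizing could look like this (field names and values are illustrative; the 0.5 threshold matches the suggestion above):

```python
import numpy as np

# Keep only detections whose score clears the confidence threshold.
scores = np.array([0.91, 0.42, 0.77, 0.12])
boxes = np.arange(4 * 7, dtype=float).reshape(4, 7)   # dummy (N, 7) box array

keep = scores >= 0.5
filtered = boxes[keep]
print(filtered.shape)   # only the 0.91 and 0.77 detections remain
```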
OK, I will try it.
When I prepared the data, l, w, h were not multiplied by 2, but BRNet's predicted results are 2 times the labels compared with mine. Why? The prediction above is for sample 000002 and its label is below; the prediction is exactly 2 times the label in l, w, h.
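One common cause of an exact 2x gap, offered here as an assumption to verify against the dataset's loading code: the raw SUN RGB-D annotation stores half-extents (box "coeffs"), while the network predicts full box sizes, so the conversion multiplies by 2.

```python
# Assumed explanation for the exact 2x gap: if the label stores half-extents,
# the full box size the network predicts is twice the stored value.
half_lwh = (0.8, 0.45, 0.35)                 # example half-extents from a label
full_lwh = tuple(2 * v for v in half_lwh)    # full (l, w, h) the network outputs
print(full_lwh)   # (1.6, 0.9, 0.7)
```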
Hi Cheng, I have read the source code, but I cannot obtain the result below from your README.md.
But when I run it on my machine, I get the result below. All the detected bboxes are green; why? Obviously, there are two objects (a table and a chair). Looking forward to your reply, thanks!