hlesmqh / WS3D

Official implementation of 'Weakly Supervised 3D Object Detection from Lidar Point Cloud' (ECCV 2020)
MIT License

Where can I get label_w/label.txt? #3

Closed zx970505 closed 3 years ago

hlesmqh commented 3 years ago

@zx970505 Thanks for your attention! The file label.txt stores the BEV click annotations created by running annotation.py and labeling the dataset yourself. (For more details, please refer to our paper.) You can also download the BEV click annotations used in our paper from here. We have already converted them to the KITTI label format, but only the x and z fields are valid.
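As an illustration of how such a file might be read, here is a minimal sketch. It assumes the annotations follow the standard 15-field KITTI label layout, with only the x and z location fields meaningful; the helper name and the placeholder values in the sample line are hypothetical, not taken from the repo.

```python
# Minimal sketch (hypothetical helper): pull the BEV click position out of a
# KITTI-format label line. In the standard KITTI layout, fields 11-13 hold the
# object location (x, y, z) in camera coordinates; for the weak annotations
# only x and z carry information, the remaining fields are placeholders.
def parse_bev_click(line):
    fields = line.split()
    cls_name = fields[0]
    x = float(fields[11])  # lateral position on the ground plane
    z = float(fields[13])  # depth (forward) position
    return cls_name, x, z

# Example line with placeholder zeros everywhere except x and z:
cls_name, x, z = parse_bev_click(
    "Car 0 0 0 0 0 0 0 0 0 0 1.84 0 8.41 0")
```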

zx970505 commented 3 years ago

@hlesmqh OK, thanks very much.

zx970505 commented 3 years ago

@hlesmqh Thanks for your reply, I have solved this problem. But it will take me a lot of time to create label.txt, since I would have to label 3,712 images. Could you give me a copy of the BEV click annotations? I cannot open the link you gave me before.

hlesmqh commented 3 years ago

Sorry for the inconvenience! We have repaired the link.

zx970505 commented 3 years ago

OK, thank you very much!

zx970505 commented 3 years ago

@hlesmqh Hi, when I run train_rpn.py I run into another issue: I do not have the file named aug_gt_database.pkl. By the way, can I add you on WeChat or contact you by email?

hlesmqh commented 3 years ago

@zx970505 You can set GT_AUG_ENABLED to False in the cfg file. And sure, you can contact me by email: mengqinghao1995@live.com.
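As a hedged sketch of that change (the option name comes from the reply above; the exact cfg file and its layout are assumptions), the edit might look like:

```yaml
# In the training cfg file (assumed YAML layout): disable ground-truth box
# augmentation so that aug_gt_database.pkl is never loaded.
GT_AUG_ENABLED: False
```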

zx970505 commented 3 years ago

OK, thanks. I will try it again following your GitHub instructions.

zx970505 commented 3 years ago

@hlesmqh Hi, sorry to disturb you again. How should I test after running python ./tools/train_cascade1.py? Also, when I run python ./tools/train_cascade1.py --weakly_num=500, I get an error that val_boxes.pkl and small_val_boxes.pkl do not exist.

hlesmqh commented 3 years ago

@zx970505 You can use eval_auto.py and choose your model for evaluation. The pickle files are generated by generate_box_dataset.py; for the small_val and val splits, change Line 23 to the split's name. Alternatively, to train faster without evaluation, comment out the code in tools/train_utils/train_utils.py from Line 588 to Line 604.

zx970505 commented 3 years ago

@hlesmqh I used my model for evaluation, but I got the following results:

```
2020-11-28 22:52:12,704 INFO -------------------performance of epoch no_number---------------------
2020-11-28 22:52:12,704 INFO 2020-11-28 22:52:12.704402
2020-11-28 22:52:12,704 INFO final average detections: 0.000
2020-11-28 22:52:12,704 INFO final average rpn_iou refined: 0.000
2020-11-28 22:52:12,705 INFO final average cls acc: 0.000
2020-11-28 22:52:12,705 INFO final average cls acc refined: 0.000
2020-11-28 22:52:12,705 INFO total bbox recall(thresh=0.100): 0 / 0 = 0.000000
2020-11-28 22:52:12,705 INFO total bbox recall(thresh=0.300): 0 / 0 = 0.000000
2020-11-28 22:52:12,705 INFO total bbox recall(thresh=0.500): 0 / 0 = 0.000000
2020-11-28 22:52:12,705 INFO total bbox recall(thresh=0.700): 0 / 0 = 0.000000
2020-11-28 22:52:12,706 INFO total bbox recall(thresh=0.900): 0 / 0 = 0.000000
2020-11-28 22:52:12,706 INFO Averate Precision:
2020-11-28 22:52:23,508 INFO Car AP@0.70, 0.70, 0.70:
bbox AP:0.0000, 0.0000, 0.0000
bev  AP:0.0000, 0.0000, 0.0000
3d   AP:0.0000, 0.0000, 0.0000
Car AP@0.70, 0.50, 0.50:
bbox AP:0.0000, 0.0000, 0.0000
bev  AP:0.0000, 0.0000, 0.0000
3d   AP:0.0000, 0.0000, 0.0000
2020-11-28 22:52:23,512 INFO result is saved to: /media/student3/46208D3E208D35C9/ZX/WS3D-master/output/rcnn/535.1_fulldata500_s500x1.00_40000/ckpt/checkpoint_iter_399843.31/eval/epoch_no_number/train
```

What can I do about this? I only changed the ckpt paths for the RPN and RCNN in eval_auto.py.

zx970505 commented 3 years ago

@hlesmqh I have also found that the folder 'WS3D-master/output/rcnn/535.1_fulldata500_s500x1.00_40000/ckpt/checkpoint_iter_399843.31/eval/epoch_no_number/train/final_result/data' is empty. What can I do about this? Looking forward to your reply!

hlesmqh commented 3 years ago

@zx970505 I'm afraid there is some problem with loading your model. eval_auto.py is prepared for evaluating the whole model. For cascade1 only, you should change the saved result from cascade-later (which is not trained) to cascade1 (your trained output). I suggest you debug and check whether you have results at Line 389 (rcnn_cls) and Line 391 (rcnn_box3d), and save them for evaluation (comment out Line 397 and change Line 410 to rcnn_cls).

zx970505 commented 3 years ago

@hlesmqh Thanks for your reply. I successfully solved this problem. Thanks again for your help.

zx970505 commented 3 years ago

@hlesmqh Hi, I am sorry to disturb you again. When I run eval_auto.py with VISUAL set to True, I get an error that visual/rpn.jpg does not exist, and the same problem for rcnn.jpg. How should I set up the visualization? I have also sent you an email in Chinese with some questions about the paper. Finally, I want to ask whether the IoU loss you used is the one from the paper 'IoU Loss for 2D/3D Object Detection', because I would like to replace it if that would improve results. Looking forward to your reply!

hlesmqh commented 3 years ago

@zx970505 Please create a folder named visual in the root directory. Our IoU loss is a bit different from the one in 'IoU Loss for 2D/3D Object Detection'; you can check this in our paper and our referenced papers [27][40].
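A minimal shell sketch of that fix (assuming the repo root is the working directory and that eval_auto.py expects visual/ directly under it, per the error message):

```shell
# Create the output folder that eval_auto.py writes rpn.jpg / rcnn.jpg into
# when VISUAL is True; its location under the repo root is an assumption.
mkdir -p visual
```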

zx970505 commented 3 years ago

@hlesmqh Thanks for your reply. It works, and I will continue to study it.

zx970505 commented 3 years ago

@hlesmqh Hello, I ran many experiments and found that my results on the KITTI validation set are about 3% higher than your published ones. I only changed the batch size to half of your setting. Could there be other reasons for this result?

zx970505 commented 3 years ago

@hlesmqh Also, how should I set 'weakly_scene' and 'weakly_ratio' in train_cascade1.py and train_cascade_later.py? I tried many combinations, but I could not reproduce the results published in your paper. Can you help me? Looking forward to your reply.

hlesmqh commented 3 years ago

@zx970505 Please make sure that you have set 'weakly_scene' to 500 and 'weakly_ratio' to 0.25 in every stage of your training process. A small batch size may cause some instability and hurt generalization; maybe you can try the test set for estimation. The training of stage-2 needs the box dataset generated from your stage-1 model. 'weakly_scene' sets the number of scenes from which 3D box annotations are selected, and 'weakly_ratio' sets the fraction of boxes chosen per scene. You can set both via the args of train_cascade1.py and train_cascade_later.py.
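To make the meaning of the two arguments concrete, here is a hedged sketch of the selection logic described above (the function and variable names are hypothetical illustrations, not taken from the repo):

```python
import random

def select_weak_annotations(scene_ids, boxes_per_scene,
                            weakly_scene=500, weakly_ratio=0.25, seed=0):
    """Illustrative only: pick `weakly_scene` scenes, then keep a
    `weakly_ratio` fraction of the precise 3D boxes in each of them."""
    rng = random.Random(seed)
    chosen = rng.sample(scene_ids, min(weakly_scene, len(scene_ids)))
    selected = {}
    for sid in chosen:
        boxes = boxes_per_scene[sid]
        k = max(1, int(len(boxes) * weakly_ratio))  # at least one box per scene
        selected[sid] = rng.sample(boxes, k)
    return selected
```

For example, with 3,712 scenes of roughly 4 boxes each (hypothetical numbers), weakly_scene=500 and weakly_ratio=0.25 would keep one precise box in each of 500 scenes.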

zx970505 commented 3 years ago

@hlesmqh OK, thank you for your reply. I changed the settings and got results, but they are still lower than your published ones. By the way, how can I try the test set? Just change cfg.TEST.SPLIT to test in weaklyIOUN.yaml? When I tried this, I got the following error:

```
eval:   0%|          | 0/7518 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "eval_auto.py", line 956, in <module>
    eval_single_ckpt(root_result_dir)
  File "eval_auto.py", line 845, in eval_single_ckpt
    eval_one_epoch_joint(model, test_loader, iter_id, root_result_dir, logger)
  File "eval_auto.py", line 170, in eval_one_epoch_joint
    for data in dataloader:
  File "/home/student1/anaconda3/envs/zx/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 346, in __next__
    data = self.dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/student1/anaconda3/envs/zx/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/student1/anaconda3/envs/zx/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/media/amax/46208D3E208D35C9/ZX/WS3D-master/lib/datasets/kitti_rcnn_dataset.py", line 388, in __getitem__
    return self.get_rpn_sample(index)
  File "/media/amax/46208D3E208D35C9/ZX/WS3D-master/lib/datasets/kitti_rcnn_dataset.py", line 465, in get_rpn_sample
    noise_gt_obj_list = self.filtrate_objects(self.get_noise_label(sample_id))
  File "/media/amax/46208D3E208D35C9/ZX/WS3D-master/lib/datasets/kitti_dataset.py", line 63, in get_noise_label
    assert os.path.exists(label_file)
AssertionError
```

I guess we also need the label_bev annotations for the test set. Looking forward to your reply.

zx970505 commented 3 years ago

@hlesmqh I also downloaded the pretrained models (Car) you provided for WS3D Stage-1 and Stage-2, but the results are lower than the published ones. The results are as follows:

```
2020-12-03 15:03:05,915 INFO -------------------performance of epoch no_number---------------------
2020-12-03 15:03:05,916 INFO 2020-12-03 15:03:05.915926
2020-12-03 15:03:05,916 INFO final average detections: 5.203
2020-12-03 15:03:05,916 INFO final average rpn_iou refined: 0.000
2020-12-03 15:03:05,916 INFO final average cls acc: 0.000
2020-12-03 15:03:05,917 INFO final average cls acc refined: 0.000
2020-12-03 15:03:05,917 INFO total bbox recall(thresh=0.100): 13152 / 14326 = 0.918051
2020-12-03 15:03:05,917 INFO total bbox recall(thresh=0.300): 13015 / 14326 = 0.908488
2020-12-03 15:03:05,917 INFO total bbox recall(thresh=0.500): 12839 / 14326 = 0.896203
2020-12-03 15:03:05,918 INFO total bbox recall(thresh=0.700): 11155 / 14326 = 0.778654
2020-12-03 15:03:05,918 INFO total bbox recall(thresh=0.900): 901 / 14326 = 0.062893
2020-12-03 15:03:05,918 INFO Averate Precision:
2020-12-03 15:03:28,982 INFO Car AP@0.70, 0.70, 0.70:
bbox AP:89.3942, 87.9237, 87.6911
bev  AP:87.8136, 84.0249, 78.3930
3d   AP:80.9057, 72.6973, 67.1388
aos  AP:89.31, 87.66, 87.29
Car AP@0.70, 0.50, 0.50:
bbox AP:89.3942, 87.9237, 87.6911
bev  AP:89.7113, 88.4285, 88.4014
3d   AP:89.6952, 88.3091, 88.2376
aos  AP:89.31, 87.66, 87.29
```

hlesmqh commented 3 years ago

@zx970505 Hello, sorry for the late reply. If you want to try the test set, please change args.test in eval_auto.py to True. It will then skip the label loading and directly generate results. The strange performance of the pretrained models may be because you are using a different PyTorch version from the one we trained with :). You can try torch==1.1.0.

zx970505 commented 3 years ago

@hlesmqh Thanks, I followed your guidance, but I found that the results on the test set are similar to those on the val set, and I don't know why. Also, I don't really understand Table 4 in the paper. For example, how can I change the parameters to get 'more precisely annotated BEV maps'? And how is '3,712 weakly labeled scenes + 534 precisely annotated instances' different from '3,712 weakly labeled scenes + 25% precisely annotated instances'? I think '534 precisely annotated instances' is the same as '25% precisely annotated instances'. Looking forward to your reply.