Closed: xiaodongww closed this issue 1 year ago
Hi!
In your command, you enabled DBSCAN (`general.use_dbscan=true`). DBSCAN is very slow; disabling it gives a major speedup at some cost in accuracy.
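To give a rough sense of the cost, here is a standalone sketch using scikit-learn's DBSCAN (not the repository's implementation); the point count is illustrative and `eps=0.95` mirrors `CURR_DBSCAN` in the command below:

```python
# Rough illustration, NOT Mask3D's code: clustering a scene-sized point set
# with DBSCAN. Real ScanNet scenes have far more points, and Mask3D runs
# DBSCAN per predicted instance, so the cost adds up per scene.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(10_000, 3))  # stand-in point cloud

# min_samples=1 makes every point a core point, so nothing is labeled noise
labels = DBSCAN(eps=0.95, min_samples=1).fit(points).labels_
print(labels.shape)  # one cluster id per point
```

Scaling the point count up toward real scene sizes makes the neighbor queries, and hence the per-scene latency, grow quickly.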
Best, Jonas
Hi,
thanks for your great work. I tried to evaluate your released model trained on ScanNet without segments, using the following command:
```bash
export OMP_NUM_THREADS=3  # speeds up MinkowskiEngine

CURR_DBSCAN=0.95
CURR_TOPK=500
CURR_QUERY=150

python main_instance_segmentation.py \
  general.experiment_name="validation_query_${CURR_QUERY}_topk_${CURR_TOPK}_dbscan_${CURR_DBSCAN}" \
  general.project_name="scannet_eval_noseg" \
  general.checkpoint='checkpoints/scannet/scannet_val.ckpt' \
  general.train_mode=false \
  general.eval_on_segments=false \
  general.train_on_segments=true \
  model.num_queries=${CURR_QUERY} \
  general.topk_per_image=${CURR_TOPK} \
  general.use_dbscan=true \
  general.dbscan_eps=${CURR_DBSCAN}
```
This process takes about 2 hours in total, roughly 28 s to infer one scene.
Testing: 98%|█████████▊| 306/312 [1:55:24<02:45, 27.58s/it]
Is this inference speed normal, or am I doing something wrong? How can I speed up inference when there are no segment labels?
Any suggestions would be helpful. Thanks!
BTW: The performance when evaluating without segments seems normal.
```
################################################################
what              :      AP  AP_50%  AP_25%
################################################################
cabinet           :   0.456   0.646   0.769
bed               :   0.526   0.769   0.817
chair             :   0.821   0.942   0.971
sofa              :   0.476   0.712   0.860
table             :   0.631   0.809   0.854
door              :   0.511   0.734   0.836
window            :   0.395   0.613   0.804
bookshelf         :   0.405   0.645   0.798
picture           :   0.519   0.639   0.710
counter           :   0.305   0.611   0.749
desk              :   0.355   0.644   0.815
curtain           :   0.483   0.727   0.854
refrigerator      :   0.475   0.646   0.720
shower curtain    :   0.479   0.704   0.828
toilet            :   0.951   0.999   0.999
sink              :   0.531   0.714   0.889
bathtub           :   0.770   0.870   0.903
otherfurniture    :   0.575   0.719   0.823
----------------------------------------------------------------
average           :   0.537   0.730   0.833
```
Hi @xiaodongww @JonasSchult, I am also studying this code and have run into a memory issue during inference. The inference code consumes a lot of memory; even though my PC has 64 GB of RAM, it is not enough to run inference on the entire ScanNet validation set. Have you faced a similar problem?
regards, zihui
Hi! S3DIS usually takes a lot of memory, but I didn't encounter this issue with ScanNet. I also use a 64 GB machine.
The code currently runs evaluation on all dataset predictions at the end of the validation epoch. You can rewrite the code to do it after each example instead; that should solve the issue in any case.
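A minimal sketch of that change (hypothetical names, not the repository's API: `predict_scene` and `evaluate_scene` stand in for model inference and per-scene AP computation). The idea is to reduce each scene's full prediction to scalar metrics immediately, so only small numbers accumulate over the epoch:

```python
# Hedged sketch, NOT Mask3D's actual code: evaluate each scene right after
# inference and keep only scalar scores, instead of holding every scene's
# full prediction in memory until the end of the validation epoch.

def predict_scene(scene):
    # stand-in for model inference: a large per-point prediction
    return [0] * scene["num_points"]

def evaluate_scene(pred, gt):
    # stand-in for per-scene metric computation: returns one scalar
    return sum(p == g for p, g in zip(pred, gt)) / len(gt)

def validate_incrementally(scenes):
    scores = []
    for scene in scenes:
        pred = predict_scene(scene)
        scores.append(evaluate_scene(pred, scene["gt"]))  # keep scalars only
        del pred  # the large prediction is released here, not at epoch end
    return sum(scores) / len(scores)

# toy data: three tiny "scenes" with 5 points each
scenes = [{"num_points": 5, "gt": [0, 0, 1, 0, 0]} for _ in range(3)]
print(validate_incrementally(scenes))
```

Note this only matches the epoch-end result exactly for metrics that average per scene; a dataset-wide AP would need the per-scene matches accumulated (still much smaller than full predictions) and reduced once at the end.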
Best, Jonas