Open Cindy0725 opened 6 months ago
ImGeoNet has not released its code or pretrained weights, so I directly cite the performance reported in its paper on the two datasets. I also do not know its real results under my experimental settings; the settings of my experiments on ImVoxelNet, NeRF-Det, and my method may differ from ImGeoNet's. The input images are resized simply for convenience, to make them the same size as ScanNet.
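As a side note on the resizing, one detail that matters when reproducing these numbers: if the images are resized to match ScanNet, the camera intrinsics have to be rescaled by the same factors, or the 3D detector's projections will be wrong. This is only a minimal sketch of that bookkeeping; the function name `scale_intrinsics` and the example focal/principal-point values are hypothetical, not from any of the codebases discussed here.

```python
import numpy as np

def scale_intrinsics(K, orig_wh, new_wh):
    """Rescale a 3x3 pinhole intrinsic matrix after resizing an image.

    K       : 3x3 intrinsics (fx, fy on the diagonal; cx, cy in the last column)
    orig_wh : (width, height) of the original image
    new_wh  : (width, height) after resizing
    """
    sx = new_wh[0] / orig_wh[0]  # horizontal scale factor
    sy = new_wh[1] / orig_wh[1]  # vertical scale factor
    S = np.diag([sx, sy, 1.0])
    return S @ K

# Example: ARKitScenes low-res frames (256x192) resized to ScanNet's 640x480.
# Both axes scale by 640/256 = 480/192 = 2.5, so fx, fy, cx, cy all grow 2.5x.
K = np.array([[211.0,   0.0, 127.5],
              [  0.0, 211.0,  95.5],
              [  0.0,   0.0,   1.0]])
K_resized = scale_intrinsics(K, (256, 192), (640, 480))
```

Because the aspect ratio is preserved here, the resize itself should not change the geometry the model sees, only the pixel resolution.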
Thank you for the quick reply! But I just downloaded the arXiv version of ImGeoNet, and it seems the performance of ImVoxelNet on the ARKitScenes dataset reported in the ImGeoNet paper is 58.0 (mAP@0.25) instead of 27.3.
I ran the ImVoxelNet code under my experimental settings and got the 27.3 result. I do not know the settings of ImGeoNet; it is not open source.
Okay, thanks again!
Hi, I have another question about the performance of NeRF-Det on the ARKitScenes dataset. In your paper, the mAP@0.25 and mAP@0.5 are 39.5 and 21.9 respectively, but in the NeRF-Det paper, their mAP@0.25 on the whole-scene validation set is only 26.7. I see you reproduced the result using their public code; what trick did you use to achieve such a large improvement? Did you use the same training/validation split as NeRF-Det (i.e., the whole-scene validation set)? I am also wondering how many images you use for training and testing. I see that NeRF-Det uses 50 images for training and 100 images for testing but only got 26.7 mAP@0.25. Thank you very much! Looking forward to your kind reply.
I just used the code they provided and ran it... I use 50 images for training and 100 for testing, and the same training/validation split as NeRF-Det provided.
@SerCharles @Cindy0725 Hi, ImGeoNet's code is now available here: https://github.com/ttaoREtw/ImGeoNet I guess the performance gap of ImVoxelNet comes from this line: https://github.com/ttaoREtw/ImGeoNet/blob/main/mmdetection3d/mmdet3d/datasets/arkit_dataset.py#L39
Hi, it's great work! After reading the paper, I have a question about the performance of the compared methods. In the supplementary materials, you provide the mAP@0.25 and mAP@0.5 for the ARKitScenes dataset: the mAP@0.25 and mAP@0.5 of the baseline method ImVoxelNet are 27.3 and 8.8, but in the ImGeoNet paper, the reported performance of ImVoxelNet is much higher. I also notice that the NeRF-Det paper reports different numbers for ImVoxelNet as well.
Why is there so much difference for the same method on the same dataset? Is there any trick when training on the ARKitScenes dataset? I also notice that you resize the input images of ARKitScenes to (640, 480), which is much larger than the original size (192, 256). What is the purpose of this transformation? Will it help improve the performance?
Looking forward to your reply. Thank you very much!