SimingYan / IAE

[ICCV 2023] "Implicit Autoencoder for Point-Cloud Self-Supervised Representation Learning"
99 stars 17 forks

Concerning reproducing the results of Table 3 #11

Closed Daniellli closed 1 year ago

Daniellli commented 1 year ago

I loaded the pretrained model and fine-tuned VoteNet using your codebase. However, I cannot reproduce the performance reported in Table 3.
The fine-tuning script I use is:

python train.py \
--dataset scannet --log_dir log_scannet \
--num_point 40000 --no_height \
--pre_checkpoint_path=~/pretrained_models/scannet.pt \
--batch_size=16

The from-scratch training script I use is:

python train.py --dataset scannet --log_dir log_scannet0 --num_point 40000 --no_height --batch_size=16 

And the results I got are:


|                          | mAP@0.25 | mAP@0.50 |
|--------------------------|----------|----------|
| load the pretrained model | 58.79    | 35.26    |
| train from scratch        | 55.61    | 32.92    |

Is anything wrong?

I appreciate your help.

SimingYan commented 1 year ago

Hi,

Could you please try reducing the batch_size to 8? The batch size matters in this case.
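For reference, a sketch of the fine-tuning command from this thread with the batch size lowered to 8 (the dataset, log directory, and checkpoint path are taken from the original post and may need adjusting for your setup):

```shell
# Fine-tune VoteNet from the IAE pretrained checkpoint with batch_size=8.
# Paths and flags mirror the command posted above; only --batch_size changes.
python train.py \
    --dataset scannet \
    --log_dir log_scannet_bs8 \
    --num_point 40000 \
    --no_height \
    --pre_checkpoint_path=~/pretrained_models/scannet.pt \
    --batch_size=8
```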

Also, if you are familiar with VoteNet and its original repo, you will know that VoteNet training is very unstable; other self-supervised learning methods also suffer from this issue. Therefore, we are extending our work to a better backbone.