LMD0311 / DAPT

[CVPR 2024] Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis
https://arxiv.org/abs/2403.01439
Apache License 2.0

Cannot reproduce the reported results. #2

Closed leoli646 closed 4 months ago

leoli646 commented 5 months ago

Thanks for the great work! I am really interested in it. However, I have encountered some difficulties while attempting to reproduce the results on ModelNet40, scanobjnn_bg, and scanobjnn_hardest (w/o voting).

I've attached an image depicting the results I obtained. As you can see, the performance of the baseline (pointmae) closely matches the reported results. However, when using pointmae+dapt, there is a significant deviation from the reported values.

I conducted the experiments on a single A100 GPU with the default seed value of 0 and the default data augmentation (PointcloudScaleAndTranslate), unless otherwise specified.

I eagerly await your response and guidance on this matter.

LMD0311 commented 5 months ago

Thanks for your interest!

In our experiments, we didn't tune the seeds. I've just re-cloned my repository, re-ran the experiments with the newest configuration and `--seed 0` for ModelNet40 & scanobjnn_hardest, and obtained the same results as in the paper:

| dataset | seed | results | log |
| --- | --- | --- | --- |
| Scan_hardest | 0 | 85.08 | 20240327_112141.log |
| ModelNet40 | 0 | 93.48 | 20240327_112744.log |

I will update the configuration parameters and the training logs in the README.md file, and you can retry it later. You can also verify the results by downloading the weights provided in the README.md.

We have also noticed that different hardware platforms can impact the experimental results (even different GPU models in the same machine may produce different numbers), probably because of variations in how certain arithmetic operations are executed, which is a common occurrence. Hence, all of our reported results were obtained on a single RTX 4090.
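For anyone trying to narrow down run-to-run variation before comparing against the paper's numbers, a minimal seeding sketch in PyTorch may help. This is a generic illustration, not code from the DAPT repository; the `set_seed` helper name is hypothetical:

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Seed the common RNG sources (hypothetical helper, not from the DAPT repo)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for determinism in cuDNN kernel selection.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(0)
x = torch.rand(3)  # identical across runs on the same hardware/software stack
```

Note that even with fixed seeds, some CUDA operations are nondeterministic, so small differences across GPU models (A100 vs. RTX 4090) can remain.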