Open RobinLiuZX opened 3 weeks ago
Dear author, first of all, thank you for your excellent work! After finishing my reproduction experiment, I found that the results I obtained on the ModelNet40 dataset, following the example training and testing instructions, did not reach the results reported in your paper. I would like to know where I went wrong in my settings. Since I only have RTX 3090 24 GB GPUs, I set the `batch_size` to 8. I wonder if this will have a negative impact on the experimental results?
Hi,

Thanks for reaching out. The difference in batch size could indeed affect the results. In our original experiments on the ModelNet40 dataset, we used a batch size of 32 on an NVIDIA RTX A6000 GPU. A smaller batch size, such as 8, can influence model convergence and performance, especially when using techniques like batch normalization. You could try scaling the learning rate proportionally (e.g., reducing it by a factor of 4, matching the 8/32 batch-size ratio) to see whether that stabilizes training. Keeping the total number of epochs and the other hyperparameters the same would also help maintain consistency with our setup.
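For concreteness, here is a minimal sketch of that proportional (linear) learning-rate scaling, assuming a PyTorch setup; the base learning rate, the `SGD` optimizer, and the stand-in model below are placeholders for illustration, not the hyperparameters actually used in the paper:

```python
import torch

# Placeholder values -- substitute the repo's actual config.
BASE_BATCH_SIZE = 32   # batch size used in the original experiments
BASE_LR = 1e-3         # hypothetical base learning rate, not from the paper
batch_size = 8         # what fits on an RTX 3090 (24 GB)

# Linear scaling rule: shrink the learning rate by the batch-size ratio,
# i.e. 8 / 32 = 1/4 -- the "factor of 4" reduction suggested above.
lr = BASE_LR * batch_size / BASE_BATCH_SIZE

model = torch.nn.Linear(3, 40)  # stand-in for the actual point-cloud model
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
print(f"scaled lr: {lr}")  # 0.00025 with the placeholder values above
```

With batch size 8, each batch-norm layer also estimates statistics from only 8 samples, which adds noise; if instability persists after scaling the learning rate, accumulating gradients over 4 steps to emulate an effective batch size of 32 is another option worth trying.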