CGuangyan-BIT / PointGPT

[NeurIPS 2023] PointGPT: Auto-regressively Generative Pre-training from Point Clouds
MIT License

Question about the results of the pretrain and finetune process #3

Closed CHANG1412 closed 11 months ago

CHANG1412 commented 1 year ago

Hi,

Thanks for the great work. I pretrained the network on ShapeNet, but only got a loss of around 30%. Is this normal? I got a loss of 2% when I pretrained with pointMAE. Also, I tried to finetune on ModelNet, but got the following error:

    Traceback (most recent call last):
      File "main.py", line 95, in <module>
        main()
      File "main.py", line 89, in main
        finetune(args, config, train_writer, val_writer)
      File "/home/pointGPT/PointGPT/tools/runner_finetune.py", line 176, in run_net
        losses.update([torch.ones_like(loss1).item()])
    ValueError: only one element tensors can be converted to Python scalars

I'm just wondering what I'm missing here. Thanks a lot!
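(For context, a minimal standalone sketch, not from the repo, of why this `ValueError` appears: under multi-GPU DataParallel training the returned loss is a tensor with one element per GPU, so `.item()` cannot reduce it to a single Python scalar. The tensor values below are made up for illustration.)

```python
import torch

# Hypothetical per-GPU losses, as DataParallel would gather them (one value per GPU).
loss1 = torch.tensor([0.8, 0.9])

try:
    torch.ones_like(loss1).item()            # the failing call from the traceback
except ValueError as e:
    print(e)                                 # "only one element tensors can be converted to Python scalars"

print(torch.ones_like(loss1).mean().item())  # averaging first yields a single scalar: 1.0
```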

CGuangyan-BIT commented 1 year ago

For the first question: yes, it is normal to see a higher loss than in the Point-MAE pre-training task, because our mask ratio is higher and the task is more difficult.

Regarding the second question, I suspect you are training with multiple GPUs. I apologize that the code we uploaded was not adapted for this. Averaging the loss across GPUs right before that line should resolve the issue. We will also update the code.
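A minimal sketch of that averaging fix, not the exact patch that will land in the repo: the helper name `reduce_mean` and the commented loop lines are illustrative, only `loss1` and `losses.update` come from the traceback.

```python
import torch
import torch.distributed as dist

def reduce_mean(t: torch.Tensor) -> torch.Tensor:
    """Average a tensor over all processes when torch.distributed is in use."""
    if dist.is_available() and dist.is_initialized():
        t = t.clone()
        dist.all_reduce(t, op=dist.ReduceOp.SUM)
        t /= dist.get_world_size()
    return t

# Inside the training loop of tools/runner_finetune.py, just before the failing line:
# loss1 = reduce_mean(loss1.mean())   # .mean() also handles DataParallel, where
#                                     # loss1 carries one value per GPU
# losses.update([loss1.item()])       # now a single-element tensor, so .item() works
```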