VitaLemonTea1 closed this issue 8 months ago
Hi, I have the same question.
The VQ-VAE was trained for 200 epochs on 8 V100 GPUs with a batch size of 1 per GPU. After training I got the following results:
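For reference, this is roughly the distributed setup those numbers describe; it is only a minimal PyTorch DDP sketch with a placeholder model, dataset, and loss, not the actual OccWorld training code:

```python
# Minimal PyTorch DDP sketch of the stage-1 setup above: 8 GPUs, batch size 1
# per GPU (effective batch size 8), 200 epochs. Model, data, and loss are
# placeholders, not the real OccWorld VQ-VAE or occupancy dataset.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

dist.init_process_group("nccl")        # launch with: torchrun --nproc_per_node=8 this_script.py
rank = dist.get_rank()
torch.cuda.set_device(rank)

dataset = TensorDataset(torch.randn(64, 16))                          # placeholder data
model = DDP(torch.nn.Linear(16, 16).cuda(rank), device_ids=[rank])    # placeholder model
optim = torch.optim.AdamW(model.parameters(), lr=1e-4)
sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=1, sampler=sampler)           # batch size 1 per GPU

for epoch in range(200):
    sampler.set_epoch(epoch)           # reshuffle across GPUs each epoch
    for (x,) in loader:
        loss = model(x.cuda(rank)).pow(2).mean()                      # placeholder loss
        optim.zero_grad()
        loss.backward()
        optim.step()
```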
Then, for the second stage, the VQ-VAE checkpoint (epoch_200.pth) was loaded and OccWorld was trained for another 200 epochs on 8 V100 GPUs. After training I ran eval_metric_stp3.py and got the final results:
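For completeness, this is roughly how I load the stage-1 checkpoint before the stage-2 run; the "state_dict" key and the placeholder module are assumptions about the checkpoint layout, not the repo's confirmed format:

```python
# Rough sketch of loading the stage-1 VQ-VAE weights (epoch_200.pth) before the
# stage-2 run. The "state_dict" key and the placeholder module are assumptions
# about the checkpoint layout; the real model comes from the repo's config.
import torch
import torch.nn as nn

class PlaceholderVQVAE(nn.Module):     # stand-in for the real tokenizer
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(16, 16)

vqvae = PlaceholderVQVAE()
ckpt = torch.load("epoch_200.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
missing, unexpected = vqvae.load_state_dict(state, strict=False)
print("missing keys:", len(missing), "unexpected keys:", len(unexpected))
```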
However, the results seem to differ from those reported in the paper:
I have the same question. Could the authors provide the training log and testing log? Thanks!
Thank you for your interest! I'm very sorry for replying so late. I have provided the training and testing logs for the GitHub repository code. The results may differ a bit due to randomness. I have also provided the model used in the paper and its evaluation log. In addition, I recommend selecting the best VQ-VAE model for the OccWorld training.
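For example, one way to pick that checkpoint is to score every saved epoch on the validation split and keep the best one; evaluate_checkpoint and the checkpoint directory below are only placeholders for whatever reconstruction metric and output path you use, not names from the repo:

```python
# Sketch of selecting the best VQ-VAE checkpoint for stage 2. evaluate_checkpoint
# is a placeholder for whatever validation metric you use (e.g. reconstruction
# mIoU), and the checkpoint path is illustrative; neither comes from the repo.
import glob

def evaluate_checkpoint(path: str) -> float:
    # Placeholder: in practice, load this checkpoint, run the validation split,
    # and return its metric value.
    return 0.0

paths = sorted(glob.glob("out/vqvae/epoch_*.pth"))
assert paths, "no checkpoints found"
scores = {p: evaluate_checkpoint(p) for p in paths}
best_ckpt = max(scores, key=scores.get)
print("best VQ-VAE checkpoint:", best_ckpt, "score:", scores[best_ckpt])
```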
Hi, thanks for your help. I just finished training on 4 x 3090 GPUs. After evaluating, I see that my result differs a lot from yours. I wonder if you could tell me where I can change the tokenizer settings and other hyperparameters. Thanks!
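To be concrete, I mean settings of this kind; the key names and values below are only my guesses at typical VQ-VAE hyperparameters, not the repo's actual config fields:

```python
# Hypothetical example of the tokenizer settings I mean; the key names and
# values are guesses at typical VQ-VAE hyperparameters, not the repo's actual
# config fields.
tokenizer_cfg = dict(
    codebook_size=512,        # number of discrete codes
    embedding_dim=128,        # dimension of each code vector
    downsample_factor=4,      # spatial downsampling of the occupancy grid
    commitment_weight=0.25,   # weight of the VQ commitment loss
)
optimizer_cfg = dict(type="AdamW", lr=1e-3, weight_decay=0.01)
print(tokenizer_cfg, optimizer_cfg)
```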