Have you addressed this problem, or is this issue acceptable? @bob35buaa
Thanks.
No, the author hasn't replied to this question.
Sorry for the late response. I just found some time to verify the model on KIT. The training curve and accuracy are normal given the limited size of KIT. Since the configurations for KIT and HumanML3D differ slightly because of the dataset sizes, we suggest following the configuration from the corresponding opt.txt file. Please refer to https://github.com/EricGuo5513/momask-codes/issues/63#issuecomment-2270312111.
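For reference, here is a minimal sketch of reading such a saved opt.txt back into a namespace. The `key: value` line format and the checkpoint path below are assumptions; if the repository provides its own option loader, that should be preferred.

```python
# Minimal sketch: parse a saved opt.txt back into a namespace, assuming the
# common "key: value" per-line format written by this codebase's option
# logger. Prefer the repository's own loader if one is provided.
from argparse import Namespace

def load_opt_txt(opt_path):
    opt = {}
    with open(opt_path) as f:
        for line in f:
            line = line.strip()
            if ": " not in line:
                continue  # skip header/separator lines
            key, value = line.split(": ", 1)
            # Best-effort recovery of bool/int/float values stored as text.
            if value in ("True", "False"):
                opt[key] = (value == "True")
            else:
                for cast in (int, float):
                    try:
                        opt[key] = cast(value)
                        break
                    except ValueError:
                        continue
                else:
                    opt[key] = value
    return Namespace(**opt)

# Hypothetical checkpoint path; point this at the KIT run you are reproducing.
kit_opt = load_opt_txt("checkpoints/kit/exp_name/opt.txt")
```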
As for the animation visualization on KIT, the issue is the radius used in plot_3d: https://github.com/EricGuo5513/momask-codes/commit/4038d850fb390f166033dcce9596aaddaac73016
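For context, a hedged usage sketch of the scale issue: KIT joint positions live on a much larger coordinate scale than HumanML3D, so the plotting radius must be enlarged accordingly. The helper names and signature below follow the text-to-motion-style utilities this codebase inherits, and the radius value is illustrative; the linked commit is the authoritative fix.

```python
# Illustrative only: render a KIT motion with an enlarged plotting radius.
# Helper names and signature follow the utils/ layout inherited from
# text-to-motion; the radius value is an assumption -- see the linked
# commit for the actual change.
import numpy as np
from utils.plot_script import plot_3d_motion
from utils.paramUtil import kit_kinematic_chain

joints = np.load("kit_sample_joints.npy")  # (nframes, 21, 3) recovered joints
plot_3d_motion("kit_sample.mp4", kit_kinematic_chain, joints,
               title="KIT sample",
               fps=12.5,        # KIT motions run at 12.5 fps
               radius=240 * 8)  # enlarged for KIT's coordinate scale
```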
Thanks! By the way, for datasets of different sizes, how should I adjust configuration options such as quantize_dropout_prob and milestones? Can you share any empirical guidelines?
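Not an official answer, but as a rough illustration of the kind of adjustment being asked about: schedule-dependent options (like LR-decay milestones) would shrink roughly in proportion to dataset size, while regularization knobs such as quantize_dropout_prob are often left unchanged. All numbers below are assumptions; the shipped opt.txt files remain the authoritative reference.

```python
# Purely illustrative scaling of schedule options for a smaller dataset.
# The milestone values and the "keep dropout fixed" choice are assumptions,
# not values confirmed by the authors; consult the released opt.txt files.
H3D_SEQS, KIT_SEQS = 14_616, 3_911   # approximate motion counts per dataset

def scale_milestones(milestones, src=H3D_SEQS, dst=KIT_SEQS):
    """Scale LR-decay milestones by the ratio of optimization steps the
    smaller dataset produces (fewer sequences -> fewer steps overall)."""
    ratio = dst / src
    return [max(1, int(m * ratio)) for m in milestones]

kit_overrides = {
    "quantize_dropout_prob": 0.2,                        # assumed unchanged
    "milestones": scale_milestones([150_000, 250_000]),  # hypothetical base values
}
print(kit_overrides)
```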
I'm trying to replicate the results on the KIT dataset, but I've observed that the training and validation classification accuracies of the Masked Transformer and Residual Transformer are not very high. Specifically, the validation classification accuracy is extremely low (below 20% for the Masked Transformer and below 10% for the Residual Transformer), and the animation results are also unreasonable. Is this normal?

[TensorBoard logs of the Masked Transformer]

[TensorBoard logs of the Residual Transformer]