Closed: cv-dote closed this issue 2 years ago
Hi thanks for your interest in our work!
The saved parameter size is not exactly the same as what is reported in the paper, because the numbers in the paper assume the model has been quantized. During training, the feature grid stores confidence vectors that indicate which codebook index should be activated; at inference time, each of these can be converted into a single integer index with torch.argmax(). We don't currently have code to do that conversion for you automatically, but we could implement something like that.
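For reference, a rough sketch of what that conversion and size estimate could look like is shown below. The tensor names, shapes, and bit widths are placeholder assumptions for illustration, not the actual layout or API of this codebase:

```python
import math
import torch

# Hypothetical feature grid: one soft index ("confidence") vector per voxel.
num_voxels, codebook_size, feat_dim = 100_000, 512, 16
feats = torch.randn(num_voxels, codebook_size)   # float32 confidence vectors (training-time storage)
codebook = torch.randn(codebook_size, feat_dim)  # shared codebook of feature vectors

# Training-time storage: one float per codebook entry per voxel.
train_bytes = feats.numel() * 4

# Inference-time storage: argmax collapses each confidence vector to a single
# integer index, which only needs log2(codebook_size) bits per voxel.
indices = feats.argmax(dim=-1)                   # [num_voxels] integer indices
bits_per_index = math.ceil(math.log2(codebook_size))
quantized_bytes = num_voxels * bits_per_index / 8 + codebook.numel() * 2  # codebook kept in fp16

print(f"training-time grid: {train_bytes / 1e6:.1f} MB, "
      f"quantized grid:     {quantized_bytes / 1e6:.2f} MB")
```

Under these assumptions, the quantized index grid is orders of magnitude smaller than the training-time confidence vectors, which is why the on-disk checkpoint can be much larger than the size reported in the paper.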
Thank you so much for that information!
Closing.
Thanks for this great work! I am trying to reproduce the VQAD paper results.
I trained the VQAD model on the RTMV dataset with the default config.
But the saved model size is over 20 MB, not as small as reported in the paper.
Could you please show me how to calculate the correct model size?
Training config:
The script I used to calculate the model size:
Thanks in advance!