[Open] Achedgrain opened this issue 5 years ago
Hi @Achedgrain, thanks for your interest in our work. Could you let me know the performance of your backbone (Global Model) after training the architecture with global average pooling here? Have you followed the preprocessing steps (rescaling the videos, etc.)? Fabien
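For readers unfamiliar with the "global average pooling" head mentioned above: it collapses a C x H x W convolutional feature map into a length-C vector by averaging each channel over its spatial positions. A minimal sketch in plain Python (an illustration of the general technique, not the authors' actual code):

```python
# Global average pooling: average each channel of a C x H x W feature
# map over its H*W spatial positions, yielding a length-C vector.
# This is a generic sketch, not the repository's implementation.
def global_average_pool(feature_map):
    pooled = []
    for channel in feature_map:          # channel is an H x W grid
        h = len(channel)
        w = len(channel[0])
        total = sum(sum(row) for row in channel)
        pooled.append(total / (h * w))
    return pooled

# Example: 2 channels, each 2x2.
features = [
    [[1.0, 3.0], [5.0, 7.0]],  # channel 0 -> mean 4.0
    [[2.0, 2.0], [2.0, 2.0]],  # channel 1 -> mean 2.0
]
print(global_average_pool(features))  # [4.0, 2.0]
```

The resulting vector is typically fed to a final linear classifier to produce per-class scores.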
@fabienbaradel thank you so much for the reply. I get 96.1881% accuracy for the Global Model. I have also done the preprocessing steps exactly as described.
Thanks for your reply. For the Global Model, you are reporting the performance on the training set, right? You should report the performance on the test set, which should be much lower.
Sorry, my mistake. With a batch size of 10 I get 2.85% accuracy, and with a batch size of 20 I get 3.33%.
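One thing worth checking when test accuracy changes with batch size: averaging per-batch accuracies gives a short final batch the same weight as a full one, so the reported number shifts as the batch size changes. Counting correct predictions over the whole test set avoids this. A small sketch of the difference (illustrative only, not tied to this repository's evaluation script):

```python
# Dataset-level accuracy: accumulate correct counts over all batches,
# then divide once. Immune to uneven (e.g. short final) batch sizes.
def dataset_accuracy(batches):
    correct = 0
    total = 0
    for preds, labels in batches:
        correct += sum(p == y for p, y in zip(preds, labels))
        total += len(labels)
    return correct / total

# Naive alternative: mean of per-batch accuracies.
def mean_batch_accuracy(batches):
    accs = [sum(p == y for p, y in zip(preds, labels)) / len(labels)
            for preds, labels in batches]
    return sum(accs) / len(accs)

# Example: a full batch of 2 (both correct) and a short batch of 1 (wrong).
batches = [([0, 1], [0, 1]), ([1], [0])]
print(dataset_accuracy(batches))    # 2/3 ~ 0.667
print(mean_batch_accuracy(batches)) # (1.0 + 0.0) / 2 = 0.5
```

If the evaluation uses the naive averaging above, the discrepancy between the two batch sizes would be expected; it does not explain near-chance accuracy on its own, though.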
I also have this problem. Have you resolved it?
Hi,
I am trying to reproduce the 86.6% test accuracy reported in the paper on the NTU dataset in the cross-view setting. I have two GTX 1080 Ti GPUs, and I cannot achieve more than 78% test accuracy with the provided script. I would really appreciate it if you could help me run your code with the appropriate parameters.
Thanks