Closed. zhang197 closed this issue 1 year ago.
Yes, I don't have a separate mask. You can look into my other repo here, where I implement a few other GCNs; some of the models in that repo include such masks.
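As a rough illustration, a learnable edge-importance mask of this kind might look like the following in a TensorFlow/Keras graph-conv block. This is only a sketch; the class name `EdgeImportanceGCN`, the einsum layout, and the shapes are illustrative assumptions, not code from either repo:

```python
import tensorflow as tf

class EdgeImportanceGCN(tf.keras.layers.Layer):
    """Toy graph-conv block with a learnable edge-importance mask (illustrative only)."""

    def __init__(self, units, adjacency, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        # Fixed (normalized) adjacency matrix A, shape (V, V).
        self.adjacency = tf.constant(adjacency, dtype=tf.float32)

    def build(self, input_shape):
        channels = input_shape[-1]
        # Feature transform W.
        self.kernel = self.add_weight(
            name="kernel", shape=(channels, self.units), initializer="glorot_uniform")
        # Learnable mask M, initialized to ones so training starts from plain A.
        self.edge_importance = self.add_weight(
            name="edge_importance", shape=self.adjacency.shape, initializer="ones")

    def call(self, x):
        # x: (batch, V, channels). Mask the adjacency element-wise, then aggregate.
        masked_a = self.adjacency * self.edge_importance
        x = tf.einsum("vw,bwc->bvc", masked_a, x)
        return tf.matmul(x, self.kernel)
```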
It's been a while since I implemented this; I believe I got better results with BatchNorm instead of Dropout for regularization.
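As a sketch of that swap (the filter count, kernel size, and dropout rate below are placeholders, not this repo's actual block):

```python
import tensorflow as tf

def conv_block(filters, use_batchnorm=True, dropout_rate=0.5):
    """Temporal conv block regularized with either BatchNorm or Dropout (illustrative)."""
    layers = [tf.keras.layers.Conv2D(filters, kernel_size=(9, 1), padding="same")]
    if use_batchnorm:
        layers.append(tf.keras.layers.BatchNormalization())
    layers.append(tf.keras.layers.ReLU())
    if not use_batchnorm:
        layers.append(tf.keras.layers.Dropout(dropout_rate))
    return tf.keras.Sequential(layers)
```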
Finally, yes, I had the same issue with the Kinetics dataset; I wasn't sure what the issue was and decided to skip it. Perhaps double-check the results with the original implementation and see if all the model hyperparameters match.
Thank you very much for your reply! :) I will reconfirm whether the hyperparameters match.
Hello, thank you very much for sharing this TensorFlow implementation! I have a few questions I'd like your advice on.
```python
if edge_importance_weighting:
    self.edge_importance = nn.ParameterList([
        nn.Parameter(torch.ones(self.A.size()))
        for i in self.st_gcn_networks
    ])
```
I've noticed that you don't seem to use this mask in your code. Am I correct?
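For context, if I remember the official PyTorch implementation correctly, this mask is applied in the forward pass roughly like this (quoted from memory, so it may differ slightly):

```python
# Official PyTorch ST-GCN forward pass (from memory): each block gets A scaled
# by its own learnable importance mask.
for gcn, importance in zip(self.st_gcn_networks, self.edge_importance):
    x, _ = gcn(x, self.A * importance)
```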
Secondly, in the official code the network's dropout is set to 0.5 for the NTU RGB+D dataset, but dropout doesn't seem to be used in your code. Is there a particular reason you don't use dropout?
Finally, I used your code to train a model on the Kinetics-skeleton dataset, but the Top-1 and Top-5 classification accuracy is almost 10% lower than the official results. (I used the official .npy dataset, converted the data format with your gen_tfrecord_data.py, and changed the number of classes in the code from 60 to 400.)
Do you know the reason, or can you give me some advice?
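For reference, the kind of conversion I mean is roughly the following. This is a generic sketch, not your actual gen_tfrecord_data.py; the file paths, array shapes, and feature names are placeholders:

```python
import numpy as np
import tensorflow as tf

def write_tfrecord(data_path, label_path, out_path, num_classes=400):
    """Serialize skeleton arrays and labels into a TFRecord file (generic sketch)."""
    data = np.load(data_path)      # e.g. shape (N, 3, T, V, M) float array
    labels = np.load(label_path)   # e.g. shape (N,) int array of class ids
    with tf.io.TFRecordWriter(out_path) as writer:
        for sample, label in zip(data, labels):
            assert int(label) < num_classes
            features = tf.train.Features(feature={
                "data": tf.train.Feature(
                    bytes_list=tf.train.BytesList(
                        value=[sample.astype(np.float32).tobytes()])),
                "label": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[int(label)])),
            })
            writer.write(tf.train.Example(features=features).SerializeToString())
```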