lshiwjx / 2s-AGCN

Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition in CVPR19

how to visualize the Adjacency matrix? #106

Open HeiHeiCCC opened 1 year ago

HeiHeiCCC commented 1 year ago

In your paper, you mention visualizing the skeleton graph at different layers for sample data, such as the 3rd, 5th, and 7th layers. Could you point me to where that part is in the code?

MBuxel commented 1 year ago

Hey, the learned global skeleton graph of each layer is saved inside the trained model. So if you want to plot the graphs for the different subsets, look at model.l1.gcn1.PA of your trained model, where l1 is the first layer and so on. Then just plot it with matplotlib.
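A minimal sketch of what that could look like (the checkpoint path and the state-dict key `l1.gcn1.PA` are examples following the repo's layer naming; in 2s-AGCN the PA tensor has shape (num_subsets, V, V)):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt


def plot_adjacency(pa, title="layer", out_path="adjacency.png"):
    """Plot each subset of a (S, V, V) adaptive adjacency tensor as a heatmap."""
    num_subsets = pa.shape[0]
    fig, axes = plt.subplots(1, num_subsets, figsize=(4 * num_subsets, 4))
    for s, ax in enumerate(np.atleast_1d(axes)):
        im = ax.imshow(pa[s], cmap="viridis")
        ax.set_title(f"{title}, subset {s}")
        fig.colorbar(im, ax=ax)
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)
    return out_path


# With a trained checkpoint it would look roughly like this
# (path and key are illustrative):
# import torch
# state = torch.load("runs/agcn_joint.pt", map_location="cpu")
# pa = state["l1.gcn1.PA"].numpy()   # first layer's learned graph
# plot_adjacency(pa, title="l1.gcn1.PA", out_path="l1_PA.png")

if __name__ == "__main__":
    # Demo with a random (3, 25, 25) tensor standing in for a real PA.
    demo = np.random.rand(3, 25, 25)
    print(plot_adjacency(demo, title="demo"))
```

The same call can be repeated for `l2.gcn1.PA`, `l3.gcn1.PA`, etc., to compare how the graph changes across layers.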

HeiHeiCCC commented 1 year ago

@MBuxel Thanks for your reply. I now know how to visualize the adjacency matrix, but my results look strange, not as clean as those in the paper; in the higher layers the results are even reversed, as if the network had not learned anything. Yet I'm getting close to 95% accuracy. I don't know what's going on here. It's weird.

MBuxel commented 1 year ago

Hmm, I can't really help you there. I'm using the network with a different dataset, and my model does learn a new matrix, but it also doesn't look as clean as the results in the paper.

jianlai123-123 commented 1 year ago

Hello, the adaptive graph convolution in the paper looks structurally similar to the attention mechanism in the Transformer. I have seen a formula in other papers, h = Attention(Q_i, K_i, V_i) + A + Ψ, i ∈ {1, ..., S}, whose code matches this paper's. The Transformer obtains global information between joints. Does this adaptive graph convolution also yield global information? If not, does it obtain local information, i.e., does it aggregate information only from adjacent joints, like a traditional GCN? Looking forward to your reply. Thank you. @HeiHeiCCC
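For what it's worth, the data-dependent part of the adaptive graph in 2s-AGCN is computed from the similarity of every pair of joint embeddings, so that term is global, much like attention; the fixed physical adjacency A_k remains local. A small NumPy sketch of the idea (the names W_theta / W_phi and the shapes are illustrative, not taken from the repo's code):

```python
import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def adaptive_graph(f_in, W_theta, W_phi):
    """Data-dependent graph: softmax-normalized similarity of joint embeddings.

    f_in: (C, V) features of one frame; returns a dense (V, V) graph in which
    every joint is connected to every other joint, as in attention.
    """
    theta = W_theta @ f_in          # (Ce, V) embedded features
    phi = W_phi @ f_in              # (Ce, V) embedded features
    return softmax(theta.T @ phi)   # (V, V), rows sum to 1


C, Ce, V = 8, 4, 25  # feature channels, embedding channels, number of joints
rng = np.random.default_rng(0)
f = rng.standard_normal((C, V))
Ck = adaptive_graph(f, rng.standard_normal((Ce, C)), rng.standard_normal((Ce, C)))
print(Ck.shape)  # (25, 25): dense, so aggregation is global, not neighborhood-only
```

Because this matrix is dense rather than restricted to the skeleton's edges, the layer can aggregate information between any two joints; the fixed A_k and the freely learned B_k are then added to it, combining local structure with this global, attention-like term.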