ASMIftekhar / VSGNet

VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions.
MIT License

Hi, I have a question about the gap between my trained results and the results reported in the paper #22

Closed bitwangdan closed 2 years ago

bitwangdan commented 3 years ago

Hi, I have trained the model many times and my best results are mean: 54.94%, scenario_1: 51.08%, scenario_2: 56.13%. This is lower than the results in the paper (51.76% / 57.03%). Did I do something wrong? Can you give me some suggestions? Thanks!

ASMIftekhar commented 3 years ago

Hello, I am assuming you followed all the instructions from the README. I do not think anything is wrong, since the result is not really that far off (about 0.69%). Everything I ran is in this repository, so I do not really have an answer to your question. If you are very eager to reproduce the exact values, maybe you can do several runs of main.py with different seed values and check the mean and variance across the runs. However, I think there can always be some small differences in results across systems.
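The multi-seed suggestion above could be sketched roughly like this. Note that `set_seed` and `train_and_eval` are hypothetical helpers for illustration only — main.py may not expose a seed argument, and the mAP values below are placeholders, not real results from this repository:

```python
import random
import statistics

def set_seed(seed: int) -> None:
    """Seed the RNGs for one run. In a real PyTorch setup you would also
    call torch.manual_seed(seed) and numpy.random.seed(seed); omitted here
    to keep the sketch dependency-free."""
    random.seed(seed)

def train_and_eval(seed: int) -> float:
    """Hypothetical stand-in for one full train/eval cycle of main.py with
    the given seed; returns a fake mAP so the aggregation can be shown."""
    set_seed(seed)
    return 51.0 + random.random()  # placeholder mAP, not a real result

# Run with several seeds, then report mean and spread across runs.
scores = [train_and_eval(s) for s in (0, 1, 2, 3, 4)]
print(f"mean mAP: {statistics.mean(scores):.2f} "
      f"+/- {statistics.stdev(scores):.2f}")
```

If the run-to-run standard deviation is on the order of the 0.69% gap, the difference is plausibly just seed/system variance rather than a setup error.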