githubBingoChen closed this issue 6 years ago
Okay, I'll upload my latest model, though I am not sure whether it is well fitted. I am evaluating my models with metrics such as Precision-Recall.
Since my model file is too big (>25 MB) to upload to GitHub, here is a download link:
https://drive.google.com/drive/folders/1s4M-_SnCPMj_2rsMkSy3pLnLQcgRakAe?usp=sharing
Thank you very much. Would it be possible for you to provide the testing results of PiCANet?
I found that my PiCANet implementation performs slightly worse than the results reported in the PiCANet paper, but if you want the testing results, I'll upload them soon. I am also going to implement some code to inspect the attention map. Thank you for your interest.
Now you can test the PiCANet model with Image_Test.py and view the results in TensorBoard. If you are not familiar with TensorBoard, I will add file-output code. I am also working on a usage guideline and execution commands based on argument parsing, but I have to study the argparse module first because I haven't used it before, so it may take some time. Sorry about that.
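For the planned argument parsing, a minimal argparse wrapper for the test script could look like the sketch below. The argument names (`--model_dir`, `--dataset`, `--cuda`) are hypothetical placeholders, not the repository's actual interface:

```python
# Hypothetical argparse CLI sketch for Image_Test.py.
# Flag names and defaults are illustrative assumptions, not the repo's real API.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description='Test a trained PiCANet model')
    parser.add_argument('--model_dir', type=str, required=True,
                        help='path to the pretrained checkpoint (.ckpt)')
    parser.add_argument('--dataset', type=str, default='test_images',
                        help='directory containing test images')
    parser.add_argument('--cuda', action='store_true',
                        help='run inference on GPU')
    return parser.parse_args(argv)


if __name__ == '__main__':
    args = parse_args()
    print(args.model_dir, args.dataset, args.cuda)
```

Passing `argv=None` makes `parse_args` read `sys.argv`, while an explicit list makes the function easy to test.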
If you run Image_Test.py or other code with the pretrained model, you will see a SourceChangeWarning. You can safely ignore it: the warning comes from a change to PiCANet's forward code (a test mode was added) and will not cause any error.
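If the warning is noisy, it can be suppressed around the load call. The sketch below defines a stand-in `SourceChangeWarning` class so it runs without PyTorch installed; in practice you would filter `torch.serialization.SourceChangeWarning` around `torch.load` (treat that exact location as an assumption about your PyTorch version):

```python
import warnings


class SourceChangeWarning(Warning):
    # Stand-in for torch.serialization.SourceChangeWarning so this
    # sketch is self-contained; use the real class with PyTorch.
    pass


def load_quietly(load_fn, path):
    # Suppress only SourceChangeWarning while loading; other warnings
    # still propagate normally. With PyTorch, load_fn would be torch.load.
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', SourceChangeWarning)
        return load_fn(path)
```

Filtering by category (rather than ignoring all warnings) keeps genuinely useful warnings visible.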
I updated Readme.md and uploaded execution guidelines. Note that they are based on my programming environment, so they may not work for you; if they don't, feel free to open an issue or email me at wogns98@kaist.ac.kr. Also, if you downloaded the model file earlier, please download it again: I modified the save/load code slightly, and the old model file will not work with the current code.
Would you mind reporting the performance score of your reimplementation?
No, it's okay. I measured my model's performance with the F-score (based on precision and recall), and the result was 0.7865 with 25epo_210000step.ckpt, which is worse than the paper's. If you want my model's performance under other metrics, please make a suggestion and I'll try them. Also, please let me know if there is anything wrong in my implementation or in the performance metric.
Detailed performance scores (F-score with beta_square = 0.3):

| Step | Value |
|---|---|
| 10000 | 0.710664 |
| 20000 | 0.75468 |
| 30000 | 0.742792 |
| 40000 | 0.769039 |
| 50000 | 0.773668 |
| 60000 | 0.771476 |
| 70000 | 0.783399 |
| 80000 | 0.755955 |
| 90000 | 0.759409 |
| 110000 | 0.751761 |
| 120000 | 0.682074 |
| 130000 | 0.740484 |
| 140000 | 0.736662 |
| 150000 | 0.75439 |
| 160000 | 0.717181 |
| 170000 | 0.737753 |
| 180000 | 0.723462 |
| 190000 | 0.780328 |
| 200000 | 0.725279 |
| 210000 | 0.786514 |
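For reference, the weighted F-score with beta^2 = 0.3 can be computed per image as sketched below. The binarization threshold (twice the mean saliency, capped at 1) is a common heuristic in the saliency-detection literature and is an assumption here, not necessarily what this repository uses:

```python
import numpy as np


def f_beta_score(pred, gt, beta_sq=0.3, eps=1e-8):
    """F-score with weighting beta_sq between precision and recall.

    pred: predicted saliency map with values in [0, 1].
    gt:   binary ground-truth mask (same shape).
    """
    # Assumed adaptive threshold: twice the mean saliency, capped at 1.
    thresh = min(2.0 * pred.mean(), 1.0)
    binary = (pred >= thresh).astype(np.float64)

    tp = (binary * gt).sum()                 # true positives
    precision = tp / (binary.sum() + eps)    # fraction of predictions that hit
    recall = tp / (gt.sum() + eps)           # fraction of ground truth found
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall + eps)
```

With beta_sq = 0.3 the score weights precision more heavily than recall, which matches the convention used in most salient-object-detection benchmarks.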
Thanks a lot for your quick response and patient explanation~
Could you upload your pretrained model and test code?