Ugness / PiCANet-Implementation

PyTorch Implementation of PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection
MIT License

pretrained model and test code #2

Closed. githubBingoChen closed this issue 6 years ago.

githubBingoChen commented 6 years ago

Could you upload your pretrained model and test code?

Ugness commented 6 years ago

Okay, I'll upload my latest model, but I am not sure whether it is well fitted or not. I am testing my models with metrics like precision-recall.

Ugness commented 6 years ago

Since my model file is too big (>25 MB) to upload on GitHub, here is the download link:

https://drive.google.com/drive/folders/1s4M-_SnCPMj_2rsMkSy3pLnLQcgRakAe?usp=sharing

githubBingoChen commented 6 years ago

Thank you very much. Would it be possible for you to provide the testing results of PiCANet?

Ugness commented 6 years ago

I found that my PiCANet implementation performs a little worse than the PiCANet paper, but if you want the testing results, I'll upload them soon. I am also going to implement some code to check the attention map. Thank you for your interest.

Ugness commented 6 years ago

Now you can test the PiCANet model with Image_Test.py and view the results in TensorBoard. If you are not familiar with TensorBoard, I will add code for file output. I am also trying to write a guideline and some execution commands using argument parsing, but I have to study the argparse module first because I haven't used it before, so it may take some time. Sorry for that. A sketch of what that could look like is below.
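
A minimal argparse sketch for an Image_Test.py-style entry point, just to illustrate the idea. The flag names (`--model_dir`, `--logdir`, `--batch_size`) are placeholders, not the repository's actual arguments:

```python
import argparse

# Hypothetical command-line interface for testing a trained PiCANet model.
# Flag names below are assumptions for illustration only.
parser = argparse.ArgumentParser(description='Test a trained PiCANet model')
parser.add_argument('--model_dir', required=True,
                    help='path to a pretrained checkpoint, e.g. 25epo_210000step.ckpt')
parser.add_argument('--logdir', default='log/Image_test',
                    help='directory where TensorBoard summaries are written')
parser.add_argument('--batch_size', type=int, default=1,
                    help='number of images per forward pass')
args = parser.parse_args()
print(args)
```

After a test run, `tensorboard --logdir <logdir>` can be used to browse the predicted saliency maps.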

If you run Image_Test.py or other code with the pretrained model, you will see a SourceChangeWarning, but you can simply ignore it. The warning comes from a change to PiCANet's forward code (a test mode was added) and will not cause any error.
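
If the warning is distracting, it can be silenced when loading the checkpoint. A minimal sketch, assuming the checkpoint stores the whole pickled model object (which is what triggers the warning) and that you run it from the repository root so the network classes are importable:

```python
import warnings
import torch
from torch.serialization import SourceChangeWarning

# Silence the SourceChangeWarning raised when a checkpoint pickled with an
# older forward() definition is loaded after the source file has changed.
warnings.filterwarnings('ignore', category=SourceChangeWarning)

# Adjust the path to wherever you saved the downloaded checkpoint.
model = torch.load('25epo_210000step.ckpt', map_location='cpu')
model.eval()
```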

Ugness commented 6 years ago

I updated README.md and uploaded execution guidelines. They are based on my programming environment, so they may not work on yours; if they don't, feel free to open an issue or email me at wogns98@kaist.ac.kr. Also, if you downloaded the model file, please download it again. I modified the save and load code slightly, and the old version of the model file will not work with the current code.
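
The exact save/load change isn't shown in this thread. For reference, a common state_dict-based pattern looks like the sketch below; the `Unet` class name, import path, and file paths are assumptions standing in for the repository's actual code:

```python
import torch
from network import Unet  # hypothetical import; substitute the repo's model class

# Save only the parameters (state_dict) rather than the whole pickled model.
model = Unet()
torch.save(model.state_dict(), 'models/picanet_state.ckpt')

# Rebuild the network and load the parameters back in.
restored = Unet()
restored.load_state_dict(torch.load('models/picanet_state.ckpt', map_location='cpu'))
restored.eval()
```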

Junjun2016 commented 6 years ago

Would you mind reporting your reimplementation's performance score?

Ugness commented 6 years ago

Not at all. I measured my model's performance with the F-score (precision-recall), and the result was 0.7865 with 25epo_210000step.ckpt, which is worse than the paper's. If you want to know my model's performance on other metrics, please give me some suggestions and I'll try them. Also, please let me know if there is anything wrong in my implementation or my performance metric.

Ugness commented 6 years ago

Detailed performance scores (F-score with beta_square = 0.3):

| Step | F-score |
| --- | --- |
| 10000 | 0.710664 |
| 20000 | 0.75468 |
| 30000 | 0.742792 |
| 40000 | 0.769039 |
| 50000 | 0.773668 |
| 60000 | 0.771476 |
| 70000 | 0.783399 |
| 80000 | 0.755955 |
| 90000 | 0.759409 |
| 110000 | 0.751761 |
| 120000 | 0.682074 |
| 130000 | 0.740484 |
| 140000 | 0.736662 |
| 150000 | 0.75439 |
| 160000 | 0.717181 |
| 170000 | 0.737753 |
| 180000 | 0.723462 |
| 190000 | 0.780328 |
| 200000 | 0.725279 |
| 210000 | 0.786514 |
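
For reference, a minimal sketch of how an F-score with beta^2 = 0.3 could be computed for a single image, assuming the prediction and ground truth are NumPy arrays; the repository's actual evaluation code may differ (e.g. in thresholding):

```python
import numpy as np

def f_score(pred, gt, beta_sq=0.3, threshold=0.5):
    """F-score with beta^2 = 0.3, a common choice in saliency detection.

    pred: predicted saliency map with values in [0, 1]
    gt:   binary ground-truth mask of the same shape
    """
    binary = pred >= threshold
    positive = gt > 0.5
    tp = np.logical_and(binary, positive).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (positive.sum() + 1e-8)
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall + 1e-8)
```
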
githubBingoChen commented 6 years ago

Thanks a lot for your quick response and patient explanation~