oraclefina / GSGNet

This project provides the code for 'Global Semantic-Guided Network for Saliency Prediction', published in Knowledge-Based Systems.

About inference #1

Closed liaochuanlin closed 4 months ago

liaochuanlin commented 5 months ago

Dear author, I have a question. Are the quantitative results on SALICON computed on the validation set? And are the MIT300 test results obtained by submitting to https://saliency.tuebingen.ai/? I'm sorry, I am new to research in this area and don't know much about it yet.

oraclefina commented 5 months ago

For SALICON, you can obtain your test-set scores from https://codalab.lisn.upsaclay.fr/competitions/8379.
For MIT300, submit your predictions to the website and you will receive your results by email.

liaochuanlin commented 5 months ago

Thank you for your answer. Do you have a download link for the TORONTO dataset? I searched for a long time but couldn't find it. I also couldn't find the loading code for datasets such as TORONTO and MIT300 in data.py. Could you make it public? Thank you again for your reply.

oraclefina commented 5 months ago

You just store the filenames of the images and ground-truth maps in CSV files like the SALICON ones under ./dataset; the data loading process is the same. TORONTO: http://www-sop.inria.fr/members/Neil.Bruce/eyetrackingdata.zip
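A minimal sketch of building such a CSV (a hypothetical helper: the column names, file extensions, and matching-by-basename convention are assumptions, so check the SALICON CSVs under ./dataset for the exact format expected by data.py):

```python
import csv
import os

def write_dataset_csv(img_dir, gt_dir, out_csv):
    """Pair image files with ground-truth map files that share a basename,
    and write the pairs to a CSV (format assumed to mirror the SALICON CSVs)."""
    rows = []
    for name in sorted(os.listdir(img_dir)):
        stem, _ = os.path.splitext(name)
        gt_name = stem + ".png"  # assumed ground-truth extension
        if os.path.exists(os.path.join(gt_dir, gt_name)):
            rows.append((os.path.join(img_dir, name),
                         os.path.join(gt_dir, gt_name)))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "map"])  # header names are illustrative
        writer.writerows(rows)
```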

liaochuanlin commented 5 months ago

Dear author, how do you split the TORONTO and PASCAL-S datasets? I couldn't find their split proportions in your paper. Thank you very much for your reply.


oraclefina commented 5 months ago

We use the entire TORONTO and PASCAL-S datasets for evaluation. Thus, there is no split for those two datasets.

liaochuanlin commented 5 months ago

Okay, got it. Thank you for your answer. Thank you very much again.


liaochuanlin commented 5 months ago

Dear author, I hope this message finds you well. Are these metrics computed by calling MATLAB from Python? If you are willing to share the code that computes these metrics, my email address is 2651106983@qq.com. I'm sorry to bother you.

liaochuanlin commented 5 months ago

Dear author, I am very sorry to bother you again. For sAUC, IG, and NSS on the TORONTO dataset, the results I calculated are very different from those in the paper, and the IG even becomes negative. Is there something wrong with my method of calculation?

oraclefina commented 5 months ago

The Python code for the metrics is in loss.py; you can use it directly. I set 352x352 in center_bias for the IG baseline.
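For reference, Information Gain measures the average log-likelihood improvement of the prediction over a center-bias baseline at fixated pixels. A minimal NumPy sketch (illustrative only; the actual implementation in loss.py may differ in normalization and epsilon handling):

```python
import numpy as np

def information_gain(pred, fix, baseline, eps=1e-7):
    """Information Gain over a baseline, in bits (hedged sketch).
    pred, baseline: saliency maps; fix: binary fixation map."""
    pred = pred / (pred.sum() + eps)          # normalize to a distribution
    baseline = baseline / (baseline.sum() + eps)
    fix = fix.astype(bool)
    # average log-likelihood gain at fixated locations
    return np.mean(np.log2(pred[fix] + eps) - np.log2(baseline[fix] + eps))
```

A negative IG simply means the prediction assigns less probability to the fixated pixels than the baseline does, which is why a mismatched baseline resolution can flip the sign.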

liaochuanlin commented 5 months ago

Dear author, thank you for your reply. Following your hint, I changed the code accordingly, and the final computed IG is -6.7237. Is there a problem with my implementation? Should the prediction map and ground-truth map be scaled to the same size as the baseline (352x352)? I look forward to your reply.

liaochuanlin commented 5 months ago

Dear author, does the PASCAL-S dataset only provide pixel-wise ground-truth segmentation masks?

oraclefina commented 5 months ago

I align my predictions with the original resolutions of the ground-truth maps before saving my prediction PNG files, e.g., TORONTO is 511x681 and PASCAL-S images have varying shapes. The following example evaluates one image:

```python
import cv2
import numpy as np
import torch

# Load data
pred = cv2.imread(pred_path, cv2.IMREAD_GRAYSCALE) / 255.0
gt = cv2.imread(gt_path, cv2.IMREAD_GRAYSCALE) / 255.0
fix = cv2.imread(FixationMaps_path, cv2.IMREAD_GRAYSCALE)
fix = np.round(fix / 255.0)  # binarize the fixation map
# To torch.Tensor with a batch dimension
pred = torch.FloatTensor(pred)[None, :, :]
gt = torch.FloatTensor(gt)[None, :, :]
fix = torch.FloatTensor(fix)[None, :, :]
sp = fix.shape
baseline = torch.FloatTensor(center_bias((sp[1], sp[2])))[None, :, :]
# Compute scores
ig_score = ig(pred, gt, baseline)
```
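The center_bias helper used above is not shown in this thread. A plausible stand-in is an isotropic Gaussian prior normalized to sum to 1 (this is a hypothetical sketch with an assumed sigma_scale; the repo's actual center_bias may differ):

```python
import numpy as np

def center_bias(shape, sigma_scale=0.25):
    """Gaussian center-bias prior over an (h, w) grid, normalized to sum to 1.
    Hypothetical stand-in for the repo's center_bias; sigma_scale is assumed."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = h * sigma_scale, w * sigma_scale
    g = np.exp(-(((ys - cy) / sy) ** 2 + ((xs - cx) / sx) ** 2) / 2.0)
    return g / g.sum()
```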
oraclefina commented 5 months ago

If you mean the fixation density maps, you can find them under ./algmaps/pascal/humanFix.

liaochuanlin commented 5 months ago

The data I downloaded only contains images and masks. Could you share the dataset you downloaded?

liaochuanlin commented 5 months ago

Dear author, are gt and fix in this code the same image?

oraclefina commented 5 months ago

PASCAL-S: https://academictorrents.com/details/6c49defd6f0e417c039637475cde638d1363037e
gt is the fixation density map; fix is the binary fixation map.

liaochuanlin commented 5 months ago

Dear author, after aligning my predictions with the original resolution of the gt maps, the TORONTO evaluation results obtained with the methods in loss.py are: AUC 0.8578, NSS 1.8485, CC 0.7518, SIM 0.6201, KL 0.5339, sAUC 0.7226, IG 0.7828. There is still a certain gap to the paper for NSS and IG. After this alignment, should the baseline still be at 352x352 resolution?

liaochuanlin commented 5 months ago

Dear author, is there anything to pay attention to when submitting MIT300 test results to saliency@tuebingen.ai? I submitted a long time ago but have received no feedback.

liaochuanlin commented 5 months ago

Dear author, when I replace auc_judd1 = auc_judd(s_map, gt) with auc_judd1 = auc_judd(s_map, fix), the AUC result becomes 0.95; the jump is too big.

oraclefina commented 5 months ago

Which model did you use for inference? You can share a Google Drive link so I can evaluate it. It took a long time for me to get my results from MIT300. If you give them log-density predictions, you will obtain higher scores on some metrics, since their evaluation method optimizes the predictions for each metric.

liaochuanlin commented 5 months ago

Dear author, I used the model trained on SALICON for inference. The model weights and the inference results are at the following link: https://drive.google.com/drive/folders/1-YnFzOFVhNby5JSE4Bm91vMBztkWFH55?usp=sharing

oraclefina commented 5 months ago

I evaluated your predictions. The results are normal: NSS=2.1646383, IG=0.9210529. What did you use for the fixation maps? TORONTO provides origfixdata.mat, which contains the ground-truth binary maps for metrics like NSS and IG.

```python
# fixmat: binary fixation maps loaded from origfixdata.mat
for i, fix in enumerate(fixmat):
    cv2.imwrite("./{}.png".format(i + 1), fix * 255)

ig_score = ig(pred, fix, baseline)  # It should use fixation maps.
```
liaochuanlin commented 5 months ago

I used fixation maps obtained by binarizing the images in /media/test_lcl/GSGNet/dataset/TORONTO/eyetrackingdata/fixdens ("Density Maps produced from raw experimental eye tracking data"). Could you share the detailed processing of the origfixdata.mat file and the code that generates the fixation maps?

oraclefina commented 5 months ago

Use scipy.io.loadmat to load the file, and then obtain the values by their keys. Make sure that the evaluation files are matched by filename.
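A minimal sketch of that loading step (the key name "fixmat" and the cell-array-per-image layout are assumptions about origfixdata.mat; print the non-private keys of the loaded dict to find the real variable name):

```python
import numpy as np
from scipy.io import loadmat

def load_fixation_maps(mat_path, key="fixmat"):
    """Load binary fixation maps from a MATLAB .mat file.
    NOTE: 'fixmat' is an assumed key name and a cell array of 2D maps
    is assumed -- inspect loadmat(mat_path).keys() for the real layout."""
    mat = loadmat(mat_path)
    cells = np.atleast_1d(mat[key].squeeze())
    # binarize each map: any nonzero entry counts as a fixation
    return [(np.asarray(c) > 0).astype(np.uint8) for c in cells]
```

The returned maps can then be written out with cv2.imwrite(path, m * 255), matching the snippet above.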

liaochuanlin commented 5 months ago

Dear author, what AUC value do you get from the predictions I gave you? Following your hints, among AUC, NSS, CC, SIM, KL, sAUC, and IG, only my AUC (0.8655) still has a gap to the paper. Should gt here also use fix? Thank you very much for your patience.

oraclefina commented 5 months ago

Use fix. I recommend reading "What do different evaluation metrics tell us about saliency models?".
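For context on why auc_judd(s_map, fix) and auc_judd(s_map, gt) differ so much: AUC-Judd thresholds the saliency map at the values of fixated pixels, so it expects a binary fixation map, not a density map, as ground truth. A minimal NumPy sketch (illustrative only; the version in loss.py may differ):

```python
import numpy as np

def auc_judd(s_map, fix):
    """AUC-Judd (hedged sketch). Thresholds the saliency map at each
    fixated pixel's value; TPR is over fixations, FPR over the rest."""
    s = np.asarray(s_map, dtype=np.float64).ravel()
    f = np.asarray(fix).ravel().astype(bool)
    n_fix, n_pix = f.sum(), s.size
    tpr, fpr = [0.0], [0.0]
    for t in np.sort(s[f])[::-1]:
        above = s >= t
        tp = np.logical_and(above, f).sum()
        tpr.append(tp / n_fix)
        fpr.append((above.sum() - tp) / (n_pix - n_fix))
    tpr.append(1.0)
    fpr.append(1.0)
    tpr, fpr = np.array(tpr), np.array(fpr)
    # trapezoidal area under the ROC curve
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
```

Passing a continuous density map as the second argument makes nearly every pixel count as a "fixation", which is why the score jumps.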

liaochuanlin commented 4 months ago

Thank you for your patient reply. I can now basically reproduce the results on the TORONTO dataset. Thank you very much!

liaochuanlin commented 1 month ago

Dear author, the PASCAL-S link https://academictorrents.com/details/6c49defd6f0e417c039637475cde638d1363037e cannot be opened. Where else can I get the PASCAL-S dataset? Do you have the ./algmaps/pascal/humanFix files? Could you send them to my email address if it is convenient? Thank you so much!!