QiQAng / UEDVC


How to get a SODA score #2

Open riariam opened 1 year ago

riariam commented 1 year ago

Hello, I'm currently testing your code on the ActivityNet dataset, specifically on the VC task. During the run, I noticed that the only evaluation metrics reported are METEOR, CIDEr, and BLEU. However, your paper also reports a SODA score, and I'm wondering how I can obtain this metric from the code. Could you please let me know how to compute the SODA score? Thank you in advance for your help.

QiQAng commented 1 year ago

The SODA metric is available at https://github.com/fujiso/SODA. We calculated it after inference was completed.
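
For reference, a minimal sketch of how one might dump inference results into the ActivityNet-Captions-style submission JSON that the offline evaluators (SODA, densevid_eval) consume. The `predictions` structure and file name here are hypothetical, since the actual output format of the UEDVC inference script may differ:

import json

# Hypothetical inference output: {video_id: [(start_sec, end_sec, caption), ...]}
predictions = {
    "v_QOlSCBRmfWY": [(0.8, 12.4, "a man is playing the guitar")],
}

# Standard ActivityNet Captions submission format expected by the offline evaluators.
submission = {
    "version": "VERSION 1.0",
    "results": {
        vid: [{"timestamp": [s, e], "sentence": sent} for (s, e, sent) in events]
        for vid, events in predictions.items()
    },
    "external_data": {"used": False, "details": ""},
}

with open("uedvc_val_results.json", "w") as f:
    json.dump(submission, f)

# The resulting JSON can then be passed to the SODA evaluation script
# (see https://github.com/fujiso/SODA for the exact command-line usage).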

riariam commented 1 year ago

The SODA metric is available at https://github.com/fujiso/SODA. We calculated it after inference was completed.

Thank you for your response. I have another question. I tried to run your code: does the VC task in your code correspond to 'Table 3. Event captioning results on the ActivityNet Captions validation set' in the original paper? I did not change any settings in your code, and the run parameters were '../data/model_pretrain.json ../data/path_pretrain.json --eval_set 'val' --is_train'. After 50 rounds, I obtained the following results: [val step 50000: meteor 0.095722 cider 0.314167 bleu_4 0.043317 bleu_3 0.069875 bleu_2 0.122933 bleu_1 0.240579]. However, these values are very different from the results reported in your paper [meteor 11.43 cider 54.75 bleu_4 2.90]. I am not sure where the problem lies in my settings. Could you offer some help? Thank you again for your response.

QiQAng commented 1 year ago

1. The VC task corresponds to the 'Ground-Truth proposals' column. When the event modality of the VC task takes the predicted results of the ED task as input, it corresponds to the 'Generated proposals' column.
2. The metrics computed during training are only used to select the best model for inference; the final metrics are calculated with the evaluation tools at https://github.com/ranjaykrishna/densevid_eval.

Thank you for your interest in our work. At the moment I am fully occupied with graduation-related matters, and I will reorganize the code and checkpoints in my spare time afterwards.
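
As a hedged sketch of what that offline evaluation step might look like once the submission JSON has been written: the script name and -s/-r flags below follow my reading of the densevid_eval README and should be verified against the version you clone, and the reference-file paths are placeholders.

import subprocess

# Placeholder paths; adjust to where the cloned repo and reference files actually live.
submission = "uedvc_val_results.json"
references = ["data/val_1.json", "data/val_2.json"]  # ActivityNet Captions val splits

# densevid_eval (https://github.com/ranjaykrishna/densevid_eval):
# assumed invocation based on its README; double-check the flags before running.
subprocess.run(
    ["python", "evaluate.py", "-s", submission, "-r", *references],
    cwd="densevid_eval",
    check=True,
)

# SODA (https://github.com/fujiso/SODA) is run the same way on the same submission
# file after inference; consult its README for the exact arguments it expects.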