LauJames / PAT

Imitation Adversarial Attacks for Black-box Neural Ranking Models
Apache License 2.0

no MRR@10 key in the results['model']['eval'][q][metric]. #3

Open Gerry-j opened 4 weeks ago

Gerry-j commented 4 weeks ago

While executing run_pairwise_ranker.py (trainer.train_ranker(mode=args.mode)), I called metrics.evaluate_and_aggregate() in network_trainer.py and noticed that results['model']['eval'][q] does not include an MRR@10 key; only 'ndcg_cut_10', 'map', and 'recip_rank' have values. Could you please tell me how to solve this?

Gerry-j commented 4 weeks ago

Here is part of the results; as you can see, there is no MRR@10 key: 'q105': {'map': 0.5, 'recip_rank': 0.5, 'ndcg_cut_5': 0.6309297535714575, 'ndcg_cut_10': 0.6309297535714575, 'ndcg_cut_15': 0.6309297535714575, 'ndcg_cut_20': 0.6309297535714575, 'ndcg_cut_30': 0.6309297535714575, 'ndcg_cut_100': 0.6309297535714575, 'ndcg_cut_200': 0.6309297535714575, 'ndcg_cut_500': 0.6309297535714575, 'ndcg_cut_1000': 0.6309297535714575
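
For reference, the per-query reciprocal rank is already present under the trec_eval-style name 'recip_rank'. Below is a minimal sketch (hypothetical data, not the repository's objects) of averaging it into an overall MRR:

```python
# Hypothetical results structure mirroring the one shown in this issue.
results = {
    'model': {
        'eval': {
            'q105': {'map': 0.5, 'recip_rank': 0.5, 'ndcg_cut_10': 0.6309},
            'q106': {'map': 1.0, 'recip_rank': 1.0, 'ndcg_cut_10': 1.0},
        }
    }
}

per_query = results['model']['eval']
# Mean reciprocal rank over all queries (no rank cutoff applied here).
mrr = sum(m['recip_rank'] for m in per_query.values()) / len(per_query)
print(f"MRR over {len(per_query)} queries: {mrr:.4f}")
```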

LauJames commented 4 weeks ago

Dear Gerry-j,

I'm sorry for the confusion. The scores obtained from network_trainer.py are used to select the best model.

Our standard MRR and NDCG scores are calculated separately from the output run files. Kindly use trec_eval_tools.py to obtain accurate results.
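
For reference, a rough sketch of that separate evaluation step, assuming trec_eval-style qrels and runs loaded as nested dicts. pytrec_eval is used here only for illustration; trec_eval_tools.py in the repository is the authoritative path, and the qrels/run values below are placeholders:

```python
import pytrec_eval

# Placeholder qrels and run (query id -> doc id -> relevance / score); in the
# repository these would come from the MS MARCO qrels and the output run files.
qrels = {'q105': {'doc1': 1, 'doc2': 0}}
run = {'q105': {'doc1': 1.2, 'doc2': 3.4, 'doc3': 0.7}}

# MRR@10: truncate each ranking to its 10 highest-scoring documents, then
# average the per-query reciprocal rank that trec_eval reports as 'recip_rank'.
run_top10 = {
    q: dict(sorted(docs.items(), key=lambda kv: kv[1], reverse=True)[:10])
    for q, docs in run.items()
}

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {'recip_rank', 'ndcg_cut'})
scores = evaluator.evaluate(run_top10)
mrr10 = sum(s['recip_rank'] for s in scores.values()) / len(scores)
ndcg10 = sum(s['ndcg_cut_10'] for s in scores.values()) / len(scores)
print(f"MRR@10 = {mrr10:.4f}, NDCG@10 = {ndcg10:.4f}")
```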

Best regards, James

Gerry-j commented 4 weeks ago

I'm sorry, but I have not been able to complete training of the surrogate model myself. Could you please send me the surrogate model's parameters? My email address is 3240584315@qq.com. Thank you!

LauJames commented 3 weeks ago

Thanks for your interest.

I have uploaded the model weights to Google Drive: https://drive.google.com/drive/folders/1uh3425A_ZHP79ZE3-UhnhwDZwWW57jx8?usp=sharing

You can load these models based on open-source code for subsequent tasks :)

'imitate.v1' and 'imitate.v2' are variants in the code. 'Imitation.bert_large..pth' and 'Imitation.MiniLM..pth' correspond to 'imitate.v1' and 'imitate.v2', respectively.

If you want to load the model correctly, please ensure the model path is correct.
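
For anyone loading these checkpoints, here is a generic sketch, not the repository's own loading code: it assumes the .pth files are PyTorch state_dicts for a HuggingFace-style cross-encoder, and the base model name and checkpoint path are placeholders to be matched to the variant you use.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholders: pick the base architecture matching the variant
# ('imitate.v1' -> bert-large, 'imitate.v2' -> MiniLM) and point ckpt_path
# at the downloaded .pth file.
base_model = "bert-large-uncased"
ckpt_path = "checkpoints/Imitation.bert_large.pth"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model)

state_dict = torch.load(ckpt_path, map_location="cpu")
# Some checkpoints wrap the weights under an extra key such as 'model_state_dict'.
if isinstance(state_dict, dict) and "model_state_dict" in state_dict:
    state_dict = state_dict["model_state_dict"]

# strict=False reports (rather than fails on) key mismatches; the repository's
# own model class remains the authoritative way to load and run these weights.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", len(missing), "| unexpected keys:", len(unexpected))
model.eval()
```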

Best wishes,

Jiawei Liu, Wuhan University

Gerry-j commented 3 weeks ago

Thank you for sharing, and I wish you all the best in life.

LauJames commented 3 weeks ago

"Is the file required by sample_runs.py (bert_ranker/results/runs/runs.bert-large-uncased.public.bert.msmarco.eval_full_dev1000.csv) generated by dev_public_bert_ranker.py?"

yes!

On 2024-08-21 19:36:13, "Gerry-j" wrote:

Hello, I renamed the runs.bert-large-uncased.public.bert.msmarco.Wed_Aug_21.dl2019.csv file generated by dev_public_bert_ranker.py to XXX.msmarco.eval_full_dev1000.csv, but running sample_runs.py then throws an error. I suspect the way I renamed the csv file may be wrong, so I wanted to ask: is the file required by sample_runs.py (bert_ranker/results/runs/runs.bert-large-uncased.public.bert.msmarco.eval_full_dev1000.csv) generated by dev_public_bert_ranker.py?
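
For reference, a minimal sketch of placing the generated run under the filename sample_runs.py expects (paths taken from this thread). Note that the generated file here is a dl2019 run while the expected name refers to the full dev1000 set, so an error could also come from the run's contents rather than its name.

```python
import shutil
from pathlib import Path

runs_dir = Path("bert_ranker/results/runs")
# Run file produced by dev_public_bert_ranker.py (name taken from this thread).
src = runs_dir / "runs.bert-large-uncased.public.bert.msmarco.Wed_Aug_21.dl2019.csv"
# Filename that sample_runs.py looks for.
dst = runs_dir / "runs.bert-large-uncased.public.bert.msmarco.eval_full_dev1000.csv"

shutil.copyfile(src, dst)
print(f"copied {src} -> {dst}")
```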
