Open unnormalization opened 2 years ago
Did you get the same results as the paper when you ran the entailment_retrieval.ipynb ?
I can't reproduce the same results; mine seem to be worse than the EntailmentWriter baseline.
No, I got much worse results than the EntailmentWriter baseline. I suspect the paper's reported numbers are falsified.
Hi! Have you found another method that gets a better result than the baseline?
I assumed you weren't a Chinese speaker. I haven't found a retrieval method better than the baseline yet. Let's add each other on WeChat and chat: wxr199002
Hi! Thanks for your great work and your code! I have a question about the Recall@25 metric (the `compute_retrieval_metrics` function in entailment_retrieval.ipynb). It seems that you calculate the recall as `recall = tot_sent_correct / float(tot_sent)`, i.e. pooled over all samples:

$$ {\bf Recall} = \frac {\bf the\ number\ of\ TP\ (true\ positives)\ over\ all\ samples} {\bf the\ number\ of\ gold\ sentences\ over\ all\ samples}. $$
However, I want to calculate the recall in the following way:
$$ {\bf Recall} = {\frac 1 N} \sum_{N\ {\bf samples}} \frac {\bf the\ number\ of\ TP\ of\ one\ sample} {\bf the\ number\ of\ gold\ sentences\ of\ one\ sample}. $$
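To make the difference concrete, here is a minimal sketch of the two averaging schemes. This is not the repository's actual code: it assumes that `tot_sent_correct` counts true positives pooled over all samples and `tot_sent` counts the gold premise sentences pooled over all samples, and the function names and toy data are hypothetical.

```python
# Hypothetical contrast of pooled ("micro") vs per-sample ("macro") recall.
# gold[i]      = set of gold premise sentence IDs for sample i
# retrieved[i] = set of sentence IDs retrieved (e.g. top 25) for sample i

def micro_recall(gold, retrieved):
    """Pooled recall: sum TPs over all samples / sum gold sentences over all samples."""
    tot_sent_correct = sum(len(g & r) for g, r in zip(gold, retrieved))
    tot_sent = sum(len(g) for g in gold)
    return tot_sent_correct / float(tot_sent)

def macro_recall(gold, retrieved):
    """Per-sample recall: compute each sample's ratio, then average the ratios."""
    ratios = [len(g & r) / float(len(g)) for g, r in zip(gold, retrieved)]
    return sum(ratios) / len(ratios)

# Toy example: sample 0 recovers 2 of 2 gold sentences, sample 1 recovers 1 of 4.
gold = [{1, 2}, {3, 4, 5, 6}]
retrieved = [{1, 2, 9}, {3, 7, 8}]
print(micro_recall(gold, retrieved))  # (2 + 1) / (2 + 4) = 0.5
print(macro_recall(gold, retrieved))  # (1.0 + 0.25) / 2  = 0.625
```

As the toy example shows, the two definitions diverge whenever samples have different numbers of gold sentences, which is why I'd like the raw retrieval results to recompute the metric.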
Could you please share your retrieval results (corresponding to Table 1 in your paper)? They would help me a lot, since I could then compute my own recall metric.