Hey, can you please elaborate on how you found recall@3? The cosine similarity between the true distractors and the predicted distractors will lie between 0 and 1. Please elaborate on how you converted this fraction into a number where your recall@3 = 12.98.
Please let me know if I have understood this wrongly.
Let me elaborate on my doubt:
For example, the original distractors were = ['red', 'black', 'blue']
and the predicted distractors are = ['red', 'yellow', 'green'].
Then the cosine similarities would be (values returned from the word2vec similarity function): [1, 0.8, 0.5]
Similarly, for n generated questions you get n such lists of length 3,
That is: [
[1,0.8,0.5],
[0.7,0,0.04],
[0.3,0.8,0.2],
...
[0.2,0.4,0.6]
]
Now, how did you calculate recall@3 or precision@3?
@DRSY
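
For reference, here is roughly how I am computing those similarity lists (a minimal sketch using gensim's word2vec; the model file and the positional pairing of distractors are my assumptions):

```python
# Minimal sketch: pairwise word2vec cosine similarity between each
# ground-truth distractor and the predicted distractor at the same position.
# The pre-trained model path is an assumption.
from gensim.models import KeyedVectors

model = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

originals = ['red', 'black', 'blue']
predicted = ['red', 'yellow', 'green']

# One list of length 3 per question, e.g. roughly [1.0, 0.8, 0.5]
similarities = [model.similarity(o, p) for o, p in zip(originals, predicted)]
print(similarities)
```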
Sorry for the late reply.
Precision@k: number of ground-truth distractors in the top-k predictions / k.
Recall@k: number of ground-truth distractors in the top-k predictions / total number of ground-truth distractors.
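
In code, my understanding of these two metrics looks roughly like this (a minimal sketch; it assumes an exact-match criterion between predicted and ground-truth distractors, since how the cosine similarities are turned into matches, e.g. via a threshold, is not stated):

```python
# Sketch of Precision@k / Recall@k under an exact-match assumption.
def precision_at_k(ground_truth, predictions, k):
    top_k = predictions[:k]
    hits = sum(1 for p in top_k if p in ground_truth)
    return hits / k

def recall_at_k(ground_truth, predictions, k):
    top_k = predictions[:k]
    hits = sum(1 for p in top_k if p in ground_truth)
    return hits / len(ground_truth)

# Example with the lists from the question above:
gt = ['red', 'black', 'blue']
pred = ['red', 'yellow', 'green']
print(precision_at_k(gt, pred, 3))  # 1/3, since only 'red' matches
print(recall_at_k(gt, pred, 3))     # 1/3, since 1 of 3 ground-truth distractors is recovered
```

The reported numbers would then be averaged over all questions (and typically expressed as percentages, e.g. 12.98).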