Closed: lerogo closed this issue 1 year ago
Hi Yang, thank you for your excellent work. I would like to ask how formula (3) in your article was obtained. It does not change the monotonicity of the similarity function; it only plays a smoothing role. Have you tried other methods? Also, I don't quite understand the query probabilities $p_i$ in formula (5); could you explain them to me? Thanks again for your work!
Best regards, Hailang Huang
Formula (3): The range of evidence is generally [0, +∞), while similarity generally lies in [-1, 1] or [0, 1]. When extending SGRAF, the published code replaces the sigmoid used to compute the correlation with tanh, whose range is [-1, 1], and then applies the exponential to map the values to the non-negative range. You could also try alternatives such as ReLU(s)/𝜏.
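To make the two mappings concrete, here is a minimal sketch of turning a similarity score into non-negative evidence, with both the exponential variant described above and the ReLU(s)/𝜏 alternative. The function name and the temperature `tau` are my own illustrative choices, not names from the published code, and the exact form used in the paper may differ.

```python
import numpy as np

def similarity_to_evidence(s, tau=0.5, mode="exp"):
    """Map similarity scores s (e.g. tanh outputs in [-1, 1]) to
    non-negative evidence. Both variants are monotone in s, so the
    ranking induced by the similarity is preserved.

    mode="exp":  exp(s / tau), strictly positive.
    mode="relu": max(s, 0) / tau, non-negative (zero for s <= 0).
    """
    s = np.asarray(s, dtype=float)
    if mode == "exp":
        return np.exp(s / tau)
    elif mode == "relu":
        return np.maximum(s, 0.0) / tau
    raise ValueError(f"unknown mode: {mode}")
```

Because both mappings are monotone, swapping one for the other changes only how evidence is scaled, not which candidate is ranked highest.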
Formula (5): We can think of the retrieval process as instance-level classification (each instance is an image-text pair), where the classification probability is called the query probability because it comes from the same query sample. For more on Eq. (5), you can refer to "Evidential Deep Learning to Quantify Classification Uncertainty" and "A Survey on Evidential Deep Learning for Single-Pass Uncertainty Estimation".
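As a generic illustration of how such query probabilities arise from evidence in standard evidential deep learning (following Sensoy et al., the first reference above), the evidence parameterizes a Dirichlet distribution and the expected probabilities come out as $p_i = \alpha_i / S$. This is a sketch of the textbook formulation, not necessarily the paper's exact Eq. (5):

```python
import numpy as np

def query_probabilities(evidence):
    """Expected probabilities under a Dirichlet parameterized by
    evidence, as in standard evidential deep learning.

    evidence: non-negative evidence per candidate class, shape [K].
    Returns (p, u): p_i = alpha_i / S with alpha = evidence + 1 and
    Dirichlet strength S = sum(alpha); u = K / S is the associated
    uncertainty (vacuity) mass.
    """
    alpha = np.asarray(evidence, dtype=float) + 1.0  # Dirichlet parameters
    S = alpha.sum()                                  # Dirichlet strength
    p = alpha / S                                    # query probabilities
    u = alpha.size / S                               # uncertainty mass
    return p, u
```

Note that with zero evidence everywhere, p is uniform and u is maximal, which is what makes this formulation useful for quantifying uncertainty in a single pass.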
Thank you for your reply!