QinYang79 / DECL

Deep Evidential Learning with Noisy Correspondence for Cross-modal Retrieval (ACM Multimedia 2022, PyTorch code)

Some confusion about the evidential learning in the paper #4

Closed · EricPaul03 closed 5 months ago

EricPaul03 commented 5 months ago

Since I have only just come into contact with the theory of evidential learning: is the paper saying that, for example, t2i optimization is actually done separately for each query, and that for each query in the current batch it is enough to produce a K-class output (where K is the number of images in the batch)? That is, for each query, each position of the evidence vector e1, e2, ..., eK is assigned a corresponding similarity-mapped value, so evidence is collected only once for each position of this K-dimensional vector. What is the reason why this works?

EricPaul03 commented 5 months ago

Looking forward to your answer to solve my confusion, thank you so much.

QinYang79 commented 5 months ago

In evidence theory, the events associated with evidence should be independent of each other. Inspired by EDL, we treat the K retrievals of each query as independent retrieval events (similar to classification: as long as the query is matched correctly, the retrieval is correct). The evidence can then be naturally obtained from the corresponding similarities, and we carry out the subsequent modeling based on subjective logic theory. For more on EDL, you can refer to "Evidential Deep Learning to Quantify Classification Uncertainty" and "A Survey on Evidential Deep Learning for Single-Pass Uncertainty Estimation".
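A minimal sketch of this similarity-to-evidence view for a single query, assuming a simple non-negative mapping (ReLU here; the actual mapping used in DECL may differ, e.g. exp or softplus) and the standard subjective-logic quantities from the EDL papers cited above. The function name and sample similarities are hypothetical, not the authors' code:

```python
import torch
import torch.nn.functional as F

def subjective_logic_opinion(similarities: torch.Tensor):
    """similarities: (K,) similarities of one query to the K batch candidates.

    Each of the K retrievals is treated as an independent event, so the
    K-dimensional evidence vector is filled once, directly from similarities.
    """
    evidence = F.relu(similarities)       # assumed non-negative mapping to evidence
    alpha = evidence + 1.0                # Dirichlet parameters alpha_k = e_k + 1
    S = alpha.sum()                       # Dirichlet strength
    belief = evidence / S                 # belief mass per retrieval event
    uncertainty = alpha.numel() / S       # overall uncertainty mass u = K / S
    prob = alpha / S                      # expected retrieval probabilities
    return belief, uncertainty, prob

# Example: one text query against a batch of K = 4 images
sims = torch.tensor([0.7, 0.1, -0.2, 0.3])
b, u, p = subjective_logic_opinion(sims)
print(b, u, p)  # by construction, b.sum() + u == 1
```

This is why collecting evidence once per position suffices: each position is its own independent retrieval event, and the Dirichlet strength S aggregates all of them into belief masses and a single uncertainty mass that sum to one.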

EricPaul03 commented 5 months ago

Thank you very much for your answer. I have another question: don't the EDL-related losses and the loss in Section 3.4 of the paper duplicate each other? My understanding is that the optimization objective of the RDH loss is also to approach the ground-truth label.

QinYang79 commented 5 months ago

Although they have similar optimization goals, we found that combining them results in better performance. Thanks.

EricPaul03 commented 5 months ago

Okay, thank you for your reply. It will greatly help me understand your work.