cancam / LRP

Localization Recall Precision Performance Metric toolkit for PASCAL-VOC, COCO datasets with Python and MATLAB implementations.

Question: Why are FNError and FPError divided by 'l'? #10

Closed: priteshgohil closed this issue 3 years ago

priteshgohil commented 3 years ago

I was trying to understand the code and am wondering why FNError and FPError are divided by l. Shouldn't they just be nhat and mhat, according to the equation? https://github.com/cancam/LRP/blob/f6157cea24eef7bb633d76011d6fcc24b6283c2b/cocoLRPapi-master/PythonAPI/pycocotools/cocoevalLRP.py#L332

Also, if there is a specific reason for the division, why is FNError divided by the same l?

I would appreciate it a lot if you could explain this part of the LRP calculation.

priteshgohil commented 3 years ago

So I have found out that this LRP calculation is a simplified version obtained by substituting all values into eq. (5) of the original paper. But in the code, each component (localization, mhat, and nhat) is divided by l and then multiplied by l again via the Z component. I find this step redundant and confusing; it could have been written simply with mhat, nhat, and omega.

Edit: OK, I found it. It is equation 3 in the supplementary material, with p set to 1.
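
For later readers, the compact form I ended up with is below; it is my reconstruction of eq. (5), so double-check it against the paper. Dividing each component by l and then multiplying by l again inside Z cancels out, which is why the code and the equation agree:

```latex
% My reconstruction of the compact LRP definition (eq. (5)):
% \tau is the TP validation threshold; N_{TP}, N_{FP}, N_{FN} are the
% numbers of true positives, false positives, and false negatives.
\mathrm{LRP}(P_s, G) =
  \frac{1}{Z}\left(
    \sum_{i=1}^{N_{TP}} \frac{1 - \mathrm{IoU}(x_i, g_{x_i})}{1 - \tau}
    + N_{FP} + N_{FN}
  \right),
\qquad Z = N_{TP} + N_{FP} + N_{FN}.
```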

kemaloksuz commented 3 years ago

I totally agree that this code is not easily readable. We have been working on another version of the code for a while; it is now complete, and I have just pushed it to another repo. I tested it with your previous inputs and it seems to work. I believe this will help you more.

So please check cocoeval.py at the following link. It should work if you just replace cocoevalLRP.py with the code in this file:

https://github.com/kemaloksuz/LRP-Error/blob/master/pycocotools/pycocotools/cocoeval.py

Also, please see the following link to debug and see how LRP and oLRP work:

https://github.com/kemaloksuz/LRP-Error/blob/0856957e15c346b342ca4fe26cfb8eb427b77f47/pycocotools/pycocotools/cocoeval.py#L458

Now we compute oLRP exactly, instead of iterating over confidence scores, which was an approximation. So in this version you will not see a loop over the confidence scores. Feel free to get in touch if anything is unclear.
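
For context, the old approximate computation can be thought of as sweeping score thresholds and keeping the minimum LRP. The sketch below is only illustrative (the function and variable names are mine, not the repo's), and it assumes each detection has already been matched to at most one ground truth:

```python
# Illustrative sketch of the old, approximate oLRP: evaluate LRP at a
# grid of confidence thresholds s and keep the minimum. The new code
# avoids this loop and computes oLRP exactly.
import numpy as np

def lrp_value(tp_ious, n_fp, n_fn, tau=0.5):
    """Compact LRP: TP localisation error plus FP/FN counts, over Z."""
    z = len(tp_ious) + n_fp + n_fn
    if z == 0:
        return 0.0  # no detections and no ground truths; treat as 0 here
    loc = sum((1.0 - iou) / (1.0 - tau) for iou in tp_ious)
    return (loc + n_fp + n_fn) / z

def olrp_sweep(detections, n_gt, tau=0.5):
    """detections: list of (score, iou) pairs, each already matched to
    at most one ground truth; iou <= tau means the detection is an FP."""
    best = 1.0
    for s in np.linspace(0.0, 1.0, 101):
        kept = [(score, iou) for score, iou in detections if score >= s]
        tp_ious = [iou for _, iou in kept if iou > tau]
        n_fp = len(kept) - len(tp_ious)
        n_fn = n_gt - len(tp_ious)
        best = min(best, lrp_value(tp_ious, n_fp, n_fn, tau))
    return best
```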

priteshgohil commented 3 years ago

Thank you for your message and the link to the new repository. I will check it later and confirm that it works as expected.

I spent two days debugging before I understood how LRP is calculated 😄. You did really good work. Just as it takes time to understand the AP calculation, the same goes for LRP. I think a table in the README showing the LRP calculation at different thresholds s would help future readers and users.

Q1: I have another question; I just want to make sure I understood correctly. When we calculate the localization error, we sum the IoU only over True Positives (TP), and the IoU of every False Positive (FP) is set to zero. Is this correct? So if my threshold s is 0.3 and I have 3 TPs with IoU [0.9, 0.8, 0.65] and 8 FPs whose IoU is set to zero, then only those three IoUs count towards LocError, right?

Q2: How is the TP selected when multiple detections match a single ground truth? E.g., two predictions have the following values for the same ground truth:

pred1: IoU = 0.88, conf = 0.92
pred2: IoU = 0.95, conf = 0.85

So which one is selected? In the PASCAL VOC AP calculation, pred2 is selected (highest IoU). Does LRP use the same rule?

kemaloksuz commented 3 years ago

Hi,

Answer to Q1: Yes, you are right. In fact, in Equation 5 of the paper (the compact definition of LRP), you can see that each TP contributes to the error by its IoU normalized between 0 and 1, and each FP directly adds 1 to the numerator. So, in terms of localisation, only TPs contribute, as you said.
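
Plugging your Q1 numbers into that compact form (a minimal sketch; I assume tau = 0.5 here and, for brevity, no false negatives):

```python
# Minimal check of Q1: only the three TP IoUs enter the localisation
# term; each of the 8 FPs just adds 1. Assumes tau = 0.5 and n_fn = 0.
tau = 0.5
tp_ious = [0.9, 0.8, 0.65]
n_fp, n_fn = 8, 0

loc_error = sum((1 - iou) / (1 - tau) for iou in tp_ious)
z = len(tp_ious) + n_fp + n_fn
lrp = (loc_error + n_fp + n_fn) / z
print(loc_error)  # ~1.3   (0.2 + 0.4 + 0.7)
print(lrp)        # ~0.845 ((1.3 + 8) / 11)
```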

Answer to Q2: We follow how the COCO toolkit matches ground truths and detections: the one with the larger confidence score is the TP (pred1 in your case), and the other one is a duplicate, hence an FP. As a result, pred1 contributes to the error by its localisation error (which is 0.12/0.5 = 0.24), and pred2 directly adds 1 to the error. You can check lines 195-236 in the following script to see the matching:

https://github.com/cancam/LRP/blob/master/cocoLRPapi-master/PythonAPI/pycocotools/cocoevalLRP.py
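
Here is a minimal sketch of that matching with your Q2 numbers (greedy by descending confidence; the structure is illustrative, not the actual toolkit code):

```python
# COCO-style greedy matching for a single ground truth: detections are
# visited in order of descending confidence; the first one with IoU > tau
# becomes the TP, later matches to the same GT are duplicates, i.e. FPs.
tau = 0.5
preds = [("pred1", 0.88, 0.92), ("pred2", 0.95, 0.85)]  # (name, IoU, conf)

gt_matched = False
for name, iou, conf in sorted(preds, key=lambda p: p[2], reverse=True):
    if iou > tau and not gt_matched:
        gt_matched = True
        print(name, "is the TP; localisation error =", (1 - iou) / (1 - tau))
    else:
        print(name, "is a duplicate, hence an FP; adds 1 to the error")
# pred1 is the TP; localisation error ~ 0.24 (0.12 / 0.5)
# pred2 is a duplicate, hence an FP; adds 1 to the error
```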

Sorry for the late reply and thanks for the recommendations.

kemaloksuz commented 3 years ago

Closing this issue due to inactivity, please reopen if needed.