rafaelpadilla / Object-Detection-Metrics

Most popular metrics used to evaluate object detection algorithms.

Possible bug? #20

Closed: viniciusarruda closed this issue 5 years ago

viniciusarruda commented 6 years ago

It seems that duplicated detections are currently discarded rather than marked as false positives, as they should be.

The code extracted from here:

    if iouMax >= IOUThreshold:
        if det[dects[d][0]][jmax] == 0:
            TP[d] = 1  # count as true positive
            # print("TP")
        det[dects[d][0]][jmax] = 1  # flag as already 'seen'
    # - A detected "cat" is overlaped with a GT "cat" with IOU >= IOUThreshold.
    else:
        FP[d] = 1  # count as false positive
        # print("FP")

Should be:

    if iouMax >= IOUThreshold:
        if det[dects[d][0]][jmax] == 0:
            TP[d] = 1  # count as true positive
            # print("TP")
        else:                                              ## ADDED
            FP[d] = 1  # count as false positive           ## ADDED
        det[dects[d][0]][jmax] = 1  # flag as already 'seen'
    # - A detected "cat" is overlaped with a GT "cat" with IOU >= IOUThreshold.
    else:
        FP[d] = 1  # count as false positive
        # print("FP")

As in the original code*:

    % assign detection as true positive/don't care/false positive
    if ovmax>=VOCopts.minoverlap
        if ~gt(i).diff(jmax)
            if ~gt(i).det(jmax)
                tp(d)=1;            % true positive
                gt(i).det(jmax)=true;
            else                    %% THIS SHOULD BE ADDED
                fp(d)=1;            % false positive (multiple detection)
            end
        end
    else
        fp(d)=1;                    % false positive
    end

*Download it here and take a look at line 93 of the file VOCevaldet.m inside the VOCcode folder.

Or am I missing something in your code that justifies it? Thank you.

ruslanasa commented 5 years ago

Hi, all,

Any update on this issue?

rafaelpadilla commented 5 years ago

Dear @viniciusarruda,

Thanks for your feedback. Sorry it took me some days to answer, but I needed to perform some tests before getting back to you.

Yes, you are right! The code is slightly different from the official implementation, but the difference in the Average Precision is almost undetectable and does not appear in every case. The example shown in the README, for instance, is not affected by your suggestion. I believe this is why I did not notice it earlier.
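
For intuition only, here is a rough sketch (hypothetical TP/FP sequences, not the repository's implementation) of how counting a duplicate detection as a false positive lowers the precision at that rank, and therefore the AP computed as the area under the precision envelope (all-point interpolation). On a tiny example like this the effect is exaggerated; over a full dataset with many detections it tends to be small.

    import numpy as np

    # Rough sketch (hypothetical numbers): all-point interpolated AP for a
    # single class with 3 ground truths and 3 detections, where the second
    # detection is a duplicate of the first.

    def average_precision(tp, fp, n_ground_truths):
        cum_tp = np.cumsum(tp, dtype=float)
        cum_fp = np.cumsum(fp, dtype=float)
        recall = cum_tp / n_ground_truths
        precision = cum_tp / (cum_tp + cum_fp)
        # All-point interpolation: area under the precision envelope.
        mrec = np.concatenate(([0.0], recall, [1.0]))
        mpre = np.concatenate(([0.0], precision, [0.0]))
        for i in range(len(mpre) - 2, -1, -1):
            mpre[i] = max(mpre[i], mpre[i + 1])
        idx = np.where(mrec[1:] != mrec[:-1])[0]
        return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

    # Duplicate left unflagged (behaviour before the fix) vs. duplicate as FP:
    print(average_precision([1, 0, 1], [0, 0, 0], 3))  # ~0.667
    print(average_precision([1, 0, 1], [0, 1, 0], 3))  # ~0.556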

I ran the official PASCAL VOC Matlab code and compared its results with ours. The results to be compared are presented in the columns AP (our code before) and AP (official Matlab code) in the table below:

[Image: per-class results table with columns AP (our code before) and AP (official Matlab code)]

Observations:

I kindly ask you to close this issue if that resolves the question.

Thank you again.

viniciusarruda commented 5 years ago

Thank you!