CaglarGuher / IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation


AUC-PR calculation for empty masks? #2

Closed osivaz61 closed 9 months ago

osivaz61 commented 9 months ago

Hi,

Some images do not have lesions (some SE lesion images have empty masks). How did you handle this when calculating the AUC-PR for empty masks?

Also, can you share the dice score results?

Thanks

CaglarGuher commented 9 months ago

The algorithm takes every test image with its corresponding ground truth, combines them into one big image, and then calculates the AUC-PR on that, as the previous papers suggested.

osivaz61 commented 9 months ago

I want to ask to be sure. Let's assume we have 24 images whose size is 400x400. Do you mean that aucpr takes parameters like aucpr(pred, gt), where pred and gt are 3,840,000 x 1 arrays?

3,840,000 = 400 x 400 x 24

CaglarGuher commented 9 months ago

More like: if you have 24 images at 400x400 resolution, the final image with its corresponding GT will be 9600x9600 ((400x24) by (400x24)). Because the final image is so big, you will need a minimum of 16 GB of RAM to run this AUC-PR calculation without any problem.
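A minimal sketch of what this pooled calculation could look like (assuming NumPy and scikit-learn; the shapes and names are illustrative, not the exact code from this repo):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# preds: list of 24 probability maps, each 400x400, values in [0, 1]
# gts:   list of 24 binary ground-truth masks, each 400x400, values in {0, 1}
def pooled_auc_pr(preds, gts):
    # Flatten every image and concatenate into one long vector, so the
    # PR curve is computed over all test pixels at once (one "big image").
    y_score = np.concatenate([p.ravel() for p in preds])
    y_true = np.concatenate([g.ravel() for g in gts]).astype(np.uint8)

    precision, recall, _ = precision_recall_curve(y_true, y_score)
    return auc(recall, precision)
```

Because the pixels are pooled, images with empty masks simply contribute negative pixels; the curve is still well defined as long as at least one test image contains the lesion.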


osivaz61 commented 9 months ago

Actually, I am trying to understand the scenario that does not use a patch mechanism. Normally, without a patch approach, there are 54 train images and 27 test images in the IDRiD dataset. However, the number of images containing SE lesions is 27 and 14 for the train and test sets, respectively.

When we want to segment all lesions with one model (multi-class), we have to use all train and test images. The point I am trying to understand is how I can handle the empty-mask scenario when I want to calculate AUC-PR for the SE lesion.

Because some of the 27 test images include the SE lesion and some do not.

CaglarGuher commented 9 months ago

You basically create black, empty ground truths for the model (for example with Paint), because no ground truth means there is no lesion in that specific image. I hope I understood your question :)
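If you want to avoid doing it by hand in Paint, a quick sketch for generating the missing all-black masks (assuming NumPy/Pillow and a 400x400 resolution; the size and file name are placeholders to adapt to the dataset's naming convention):

```python
import numpy as np
from PIL import Image

# Create an all-zero (black) ground truth for an image with no SE lesion.
# The height/width and output file name below are placeholders - match them
# to the real image resolution and mask naming scheme.
empty_mask = np.zeros((400, 400), dtype=np.uint8)
Image.fromarray(empty_mask).save("IDRiD_XX_SE_empty_mask.png")
```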


osivaz61 commented 9 months ago

When I want to calculate AUC-PR for an empty mask and the predicted values, the method that calculates AUC-PR gives a warning like 'the mask should contain 1s', because all values are 0. I want to ask how you handled this problem when you applied multi-class segmentation.

Is it reasonable to set AUC-PR to 1 or 0 for an empty mask? I am asking because I am trying to apply multi-class segmentation.
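For context, a minimal example of the per-image case I mean (using scikit-learn here just to illustrate; my real code differs):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# A test image with an empty SE mask: y_true contains no positive pixels.
y_true = np.zeros(400 * 400, dtype=np.uint8)
y_score = np.random.rand(400 * 400)  # model probabilities for that image

# With no positive pixels, recall is undefined for this image, so the
# library emits a warning and the per-image AUC-PR is not meaningful.
precision, recall, _ = precision_recall_curve(y_true, y_score)
```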

Thanks

osivaz61 commented 9 months ago

Thanks to you, I investigated the original PBDA GitHub code and found that it calculates AUC-PR by treating all 27 test images as one image, not by calculating AUC-PR for every image separately.

There is a different pixel-level retina dataset called FGADR. I am trying to apply multi-class segmentation to it, but there are also empty masks in the FGADR dataset, and I cannot reproduce the results of the FGADR dataset paper. Did you try to segment the FGADR dataset before?

CaglarGuher commented 9 months ago

No, I have not tried segmentation on the FGADR dataset.
