QtacierP / ISECRET

I-SECRET: Importance-guided fundus image enhancement via semi-supervised contrastive constraining
MIT License

Dataset: did you use all the EyeQ data? #1

Closed Nimbus1997 closed 1 year ago

Nimbus1997 commented 1 year ago

Hello! First of all, thank you so much for sharing your code! :) It helped me a lot!

I have one question: did you use all the data in the EyeQ dataset? As you may know, some images don't contain the whole circle of the fundus, like the image below. [image]

I have been trying to use the same data as you did (but with a different model), but because of the imperfect circle in some fundus images, high-quality images are not generated well :( I think it is because the model tries to make the circle perfect, like in the images below. [image]

Do you have any idea how to overcome this problem? You seem to have overcome it (if you used the imperfect-circle fundus images), since your FIQA scores are high!

Thanks a lot in advance:>

QtacierP commented 1 year ago

I think this is a very common issue in fundus image enhancement, especially for CycleGAN-based models. Models trained under a contrastive constraint (like CUT and I-SECRET) seem to handle it well, because the objective function aims to preserve local information during the enhancement process.
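To make this concrete, here is a minimal sketch of the PatchNCE-style objective from CUT that this constraint builds on, assuming PyTorch (this is not the exact importance-guided loss from the I-SECRET paper, just the underlying idea):

    import torch
    import torch.nn.functional as F

    def patch_nce_loss(feat_q, feat_k, temperature=0.07):
        # feat_q: (N, C) patch features from the enhanced output.
        # feat_k: (N, C) patch features from the same spatial locations
        # of the input. Each query must match its own location (positive)
        # and not the other N - 1 locations (negatives), which ties local
        # content to the place it came from.
        feat_q = F.normalize(feat_q, dim=1)
        feat_k = F.normalize(feat_k, dim=1).detach()
        logits = feat_q @ feat_k.t() / temperature  # (N, N) similarities
        targets = torch.arange(feat_q.size(0), device=feat_q.device)
        return F.cross_entropy(logits, targets)  # diagonal = positive pairs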

By the way, to solve this problem, you can try to remove the black areas from the whole EyeQ dataset. The related code is here (in degrade_eyeq.py):

def preprocess(img):
    # Locate the fundus circle: binary mask, bounding box, center, radius.
    mask, bbox, center, radius = get_mask(img)
    r_img = mask_image(img, mask)
    r_img, r_border = remove_back_area(r_img, bbox=bbox) # Uncomment this line
    mask, _ = remove_back_area(mask, border=r_border) # Uncomment this line
    # Leave these commented out: they pad the crop back into a square
    # with black borders.
    # r_img, sup_border = supplemental_black_area(r_img)
    # mask, _ = supplemental_black_area(mask, border=sup_border)
    return r_img, (mask * 255).astype(np.uint8)

In this way, you can remove all the meaningless black backgrounds, which can help the model avoid generating fake circles. I hope this trick can solve your problem :)
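For reference, here is a hypothetical, self-contained version of such a cropping step (a sketch using OpenCV and NumPy; the repo's actual get_mask/remove_back_area implementations may differ):

    import cv2
    import numpy as np

    def crop_black_background(img, thresh=10):
        # Find the fundus foreground by brightness thresholding, then crop
        # the image and mask to the foreground bounding box so no
        # meaningless black border remains.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        mask = (gray > thresh).astype(np.uint8)
        ys, xs = np.nonzero(mask)
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        return img[y0:y1, x0:x1], mask[y0:y1, x0:x1]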

QtacierP commented 1 year ago

This trick is also used in CofeNet. We observed that contrastive-learning-based models are not sensitive to this step, so we removed it.

Nimbus1997 commented 1 year ago

Oh, thank you so much! I'll check out CofeNet too.

I have one more question: did you use that remove_back_area step inside your model as well, in addition to using it when making the degraded_good dataset? Actually, it is hard for me to follow the pipeline due to my lack of Python skills :( It would help me so much if you could point out where you call remove_back_area while building the training pipeline.

Hope you have a joyful day😁

QtacierP commented 1 year ago
  1. Yes. If you uncomment the two lines I just mentioned, then all the datasets (degraded_good, crop_usable, crop_good) will be processed. But this operation is not inside the model; you should do it before the training stage.
  2. If you want to remove these meaningless areas during training, first uncomment the two lines I mentioned, then run

    python tools/degrade_eyeq.py --degrade_dir ${DATA_PATH} --output_dir ${OUTPUT_PATH} --mask_dir ${MASK_PATH} --gt_dir ${GT_PATH}

    The OUTPUT_PATH then contains the processed dataset, which will be used in training. After this preprocessing, you can train the model, and I think it will solve your problem :)

I hope this reply can help you. Have a nice day :)

Nimbus1997 commented 1 year ago

Hmm...

  1. I had run your model with the dataset created while the two lines you mentioned were commented out, but it still produced an imperfect circle, like the input images (image below). Do you know why?

[image]

  2. In degrade_eyeq.py, --degrade_dir is added to the args:

    args.add_argument('--degrade_dir', type=str, default='', help='degrade EyeQ dir')
    args.add_argument('--gt_dir', type=str, default='', help='high quality cropped image dir')
    args.add_argument('--output_dir', type=str, default='./temp', help='degraded output dir')
    args.add_argument('--mask_dir', type=str, default='./temp', help='mask output dir')

    but in the "if __name__ == '__main__':" part there is no degrade_dir, only test_dir:

    print(args.test_dir)
    print(os.listdir(args.test_dir))
    image_list = sorted(glob(os.path.join(args.test_dir, '*')))
    print(image_list)
    for image_path in tqdm(image_list):
        results.append(pool.apply_async(run, (image_path, args.output_dir, args.gt_dir, args.mask_dir)))
    pool.close()
    for res in results:
        res.get()

    So I changed degrade_dir to test_dir to run the code, and my interpretation is: (a) degrade_dir (= test_dir): input [crop_good]; (b) gt_dir: output [crop_good but resized, bigger]; (c) output_dir: output [degraded_good]; (d) mask_dir: output [mask of the fundus image with the perfect circle]. Is my interpretation right?

  3. And I just tried what you said (uncomment the two lines and run degrade_eyeq.py), but the images in the output dir look like this: degraded, and with a piece of a circle added. [image] The input image for this one is the image below. [image]

By the way, I set the four paths as: (a) degrade_dir (= test_dir): input [crop_good], from the EyeQ dataset; (b) gt_dir: output [crop_good but resized, bigger], an empty dir; (c) output_dir: output [degraded_good], an empty dir; (d) mask_dir: output [mask of the fundus image with the perfect circle], an empty dir.

Sorry to bother you with the long questions, and thank you so much again.

QtacierP commented 1 year ago

I am sorry for causing some confusion with my work.

  1. I think I misunderstood your problem. In fact, the imperfect circle cannot be avoided: it comes from camera limitations. The preprocessing only aims to remove the meaningless black areas around the circle. I am sorry for the ambiguous illustration.
  2. Yes. The test_dir should be the path of the high-quality images, and the output_dir should be the path of the synthetic low-quality images (which are used in the supervised branch of I-SECRET). The gt_dir is the path of the ground-truth high-quality images (with some preprocessing, like cropping, resizing, etc.). The mask_dir is not used in our training code, so just ignore it.
  3. Back to your original issue: why does an imperfect circle influence the enhancement performance? I ran I-SECRET again and found nothing wrong in this case: [image] May I ask which model you use? If you use a CycleGAN-based model, you should train on relatively high-resolution images (at least 512 x 512). Moreover, avoid zero-padding convolutions in your model (I use reflection padding in I-SECRET). Lastly, maybe you can train your model for longer; I see a lot of artifacts in your model's enhancement example, which suggests it is not trained well yet. Anyway, I suggest training a baseline model (like CycleGAN, CUT, or our I-SECRET) for debugging: if the enhancement looks good with those models, your current model may have an issue.
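To illustrate the padding advice, here is a minimal PyTorch sketch (channel counts are placeholders): zero padding injects artificial black borders at every layer, which encourages the generator to hallucinate a "perfect" circle, while reflection padding reuses nearby retinal pixels instead.

    import torch.nn as nn

    # Conv block with reflection padding, instead of the default zero
    # padding of nn.Conv2d.
    conv_block = nn.Sequential(
        nn.ReflectionPad2d(1),
        nn.Conv2d(64, 64, kernel_size=3, padding=0),
        nn.InstanceNorm2d(64),
        nn.ReLU(inplace=True),
    )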
QtacierP commented 1 year ago

Fundus image enhancement is quite challenging, since we need to enhance low-quality images without modifying clinical information. I truly hope these suggestions help you.

QtacierP commented 1 year ago

And thanks for raising these issues. I have corrected the typos in the degradation code:

args.add_argument('--test_dir', type=str, default='', help='degrade EyeQ dir') # update 
Nimbus1997 commented 1 year ago
  1. Oh, so you are saying that the preprocessing only handles this kind of image: [image]

but does not deal with this kind of image (because it has a clean circle edge): [image]

Did I understand correctly?

3-1. Yeah, I am using a CycleGAN-based model (and have used vanilla CycleGAN as well),

but the thing is that if the input image is like the image on the left (these are what I meant by imperfect-circle images), CycleGAN makes a strange reconstruction to turn the fundus into "a perfect circle". And because of the blue part, the image gets a low FIQA score. [image]

However, your I-SECRET model seems to enhance the image while maintaining its shape, which is surprising and exactly what I want to do. [image]

3-2. degrade_eyeq.py changes the shape as well as degrading the image. Is that okay? [image]

3-3. I noticed that the images you showed me are not square. Did you crop them, or did you feed in non-square images? [image]

Your words really help me thank you sooooo much🥺🥺

QtacierP commented 1 year ago

I see the potential problem you have met. I just resize the image to 512 x 512; the example I gave was resized back to the original resolution afterwards. I guess you want to keep the image square, so you pad black areas onto the top and bottom of the imperfect circle, right? I suggest you do not pad, and instead resize the image directly, like the red box in the following picture. [image]
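In code, that suggestion amounts to something like the sketch below (my illustration with OpenCV; the padding call shown in the comment is the approach to avoid):

    import cv2

    def prepare_square_input(img, size=512):
        # Do NOT pad black bars to make the crop square, e.g.
        #   cv2.copyMakeBorder(img, top, bottom, 0, 0,
        #                      cv2.BORDER_CONSTANT, value=0)
        # Resize the cropped fundus region directly instead; the slight
        # aspect-ratio distortion is harmless compared to fake black borders.
        return cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)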

Nimbus1997 commented 1 year ago

Aha, that was the key! You just resized the image, without adding padding!

Thank you so much!

Nimbus1997 commented 1 year ago

Did you just resize images like the one below to 512 x 512 and use them for training? Wouldn't that degrade the resolution if the images are resized to a square by interpolation? [image]

Thank you :)

QtacierP commented 1 year ago
  1. Yes, we just resize this kind of image to 512 x 512. It certainly causes some resolution loss. However, due to GPU memory limitations, we can only maintain a moderately high resolution (512 x 512).
  2. Our network is based on U-Net, which puts conditions on the input resolution (each side should be a power of 2). Since different images have various aspect ratios, without square resizing we cannot ensure that all of them are down- and up-sampled correctly. Therefore, we resize all images to a square; this operation is widely used with U-shaped networks.
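A small illustrative sketch of this square-resize round trip at inference time (here, model is a placeholder for any enhancement network operating on 512 x 512 arrays, not the repo's actual inference code):

    import cv2

    def enhance_at_512(model, img):
        # Square-resize so every side is divisible by 2**depth for U-Net,
        # run the enhancement, then restore the original aspect ratio.
        h, w = img.shape[:2]
        x = cv2.resize(img, (512, 512), interpolation=cv2.INTER_AREA)
        y = model(x)
        return cv2.resize(y, (w, h), interpolation=cv2.INTER_CUBIC)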
Nimbus1997 commented 1 year ago

I totally got it! Thank you sooooo much

Is converting the image back to its original aspect ratio after training also in your code?