Closed Nimbus2002 closed 1 year ago
I think this is a very common issue in fundus image enhancement, especially for CycleGAN-based models. Models trained under a contrastive constraint (like CUT and I-SECRET) seem to work well because the objective function aims to maintain local information during the enhancement process.
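As a rough illustration of why a contrastive constraint preserves local information, here is a minimal NumPy sketch of an InfoNCE-style patch loss (the function name and shapes are illustrative assumptions, not the actual I-SECRET implementation): features of matching patches from the input and the enhanced output are pulled together, while all other patches act as negatives, so the network is penalised for rewriting local content.

```python
import numpy as np

def patch_nce_loss(feat_src, feat_out, tau=0.07):
    # feat_src, feat_out: (N, D) L2-normalised patch features sampled at the
    # same spatial locations in the input and the enhanced image.
    logits = feat_out @ feat_src.T / tau         # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; minimising the loss keeps each
    # output patch most similar to the input patch at the same location.
    return -np.mean(np.diag(log_prob))
```

The loss is low when each enhanced patch matches its own source patch and high when patches are shuffled, which is the mechanism that discourages the "fake circle" hallucinations discussed below.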
By the way, to solve this problem, you can try removing the black area from the whole EyeQ dataset. The relevant code is in degrade_eyeq.py:
def preprocess(img):
    mask, bbox, center, radius = get_mask(img)
    r_img = mask_image(img, mask)
    r_img, r_border = remove_back_area(r_img, bbox=bbox)  # Uncomment this line
    mask, _ = remove_back_area(mask, border=r_border)     # Uncomment this line
    # r_img, sup_border = supplemental_black_area(r_img)
    # mask, _ = supplemental_black_area(mask, border=sup_border)
    print(r_img.shape)
    print(mask.shape)
    return r_img, (mask * 255).astype(np.uint8)
In this way, you can remove all the meaningless black backgrounds, which can help the model avoid generating fake circles. I hope this trick can solve your problem :)
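For intuition, this is roughly what the background-removal step does, sketched in plain NumPy. The helper names echo degrade_eyeq.py, but the bodies here are simplified assumptions, not the repository code: threshold the image to find the retinal disc, then crop to its bounding box so no all-black rows or columns remain.

```python
import numpy as np

def get_fundus_mask(img, thresh=10):
    # Hypothetical simplification of get_mask(): any pixel brighter than
    # `thresh` in any channel is treated as part of the retina.
    return img.max(axis=-1) > thresh

def crop_black_border(img, mask):
    # Simplified remove_back_area(): crop to the bounding box of the mask,
    # discarding the meaningless black background around the fundus.
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```

After this crop the model never sees large black regions, so it has no incentive to "complete" them into a circle.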
This trick is also used in CofeNet. We observed that contrastive-learning-based models are not sensitive to this step, so we removed it.
Oh, thank you so much! I'll check out CofeNet too.
I have one more question: did you use that remove_back_area function inside your model as well, or only when making the degraded_good datasets? It is hard for me to follow the pipeline due to my lack of Python skills :( It would be a great help if you could point out where the remove_back_area function is applied when training the model.
Hope you have a joyful day😁
python tools/degrade_eyeq.py --degrade_dir ${DATA_PATH} --output_dir ${OUTPUT_PATH} --mask_dir ${MASK_PATH} --gt_dir ${GT_PATH}
The OUTPUT_PATH contains the processed dataset, which will be used in training. After this preprocessing, you can train the model and I think it will solve your problem :)
I hope this reply can help you. Have a nice day :)
Hmm...
In degrade_eyeq.py, --degrade_dir is added to the args:
args.add_argument('--degrade_dir', type=str, default='', help='degrade EyeQ dir')
args.add_argument('--gt_dir', type=str, default='', help='high quality cropped image dir')
args.add_argument('--output_dir', type=str, default='./temp', help='degraded output dir')
args.add_argument('--mask_dir', type=str, default='./temp', help='mask output dir')
but in the `if __name__ == '__main__':` part there is no "degrade_dir", only "test_dir":
print(args.test_dir)
print(os.listdir(args.test_dir))
image_list = sorted(glob(os.path.join(args.test_dir, '*')))
print(image_list)
for image_path in tqdm(image_list):
    results.append(pool.apply_async(run, (image_path, args.output_dir, args.gt_dir, args.mask_dir)))
pool.close()
for res in results:
    res.get()
So I changed degrade_dir to test_dir to run the code, and my interpretation is:
(a) degrade_dir (= test_dir): input [crop_good]
(b) gt_dir: output [crop_good but resized, bigger]
(c) output_dir: output [degraded_good]
(d) mask_dir: output [mask of the fundus image with the perfect circle]
Is my interpretation right?
And I just tried what you said (uncommented the two lines and ran degrade_eyeq.py), but the images in the output dir look like this: degraded, with a piece of a circle added. The input image was the one below.
By the way, I set the four paths as:
(a) degrade_dir (= test_dir): input [crop_good], from the EyeQ dataset
(b) gt_dir: output [crop_good but resized, bigger], empty dir
(c) output_dir: output [degraded_good], empty dir
(d) mask_dir: output [mask of the fundus image with the perfect circle], empty dir
Sorry to bother you with the long questions, and thank you so much again.
I am sorry for causing some confusion in my work.
Fundus image enhancement is quite challenging, since we must enhance low-quality images while avoiding any modification of clinical information. I truly hope these suggestions help you.
And thanks for raising these issues. I have corrected the typos in the degradation code.
args.add_argument('--test_dir', type=str, default='', help='degrade EyeQ dir') # update
but it does not deal with this kind of image (because it has a clean circle edge).
Did I understand correctly?
3-1. Yes, I am using a CycleGAN-based model (and tried vanilla CycleGAN as well),
but the thing is, if the input image is like the image on the left (this is what I meant by an imperfect-circle image), CycleGAN produces a strange reconstruction to make the fundus a perfect circle. And because of the blue part, the image gets a low FIQA score.
However, your I-SECRET model seems to enhance the image while maintaining its shape, which is surprising and exactly what I want to do.
3-2. degrade_eyeq.py changes the shape as well as degrading the image. Is that okay?
3-3. I noticed that the images you showed me are not square. Did you crop them, or did you feed in non-square images?
Your answers really help me, thank you sooooo much🥺🥺
I see the potential problem you are facing. I just resize the image to 512 x 512; the example I gave was resized back to the original resolution. I guess you want to keep the image square, so you pad black areas on the top and bottom of the imperfect circle, right? I suggest you do not pad, and instead resize the image directly, like the red box in the following picture.
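Concretely, the suggestion is to resize the (possibly non-square) crop straight to 512 x 512 rather than padding it square first, and then resize the enhanced result back afterwards. A minimal NumPy sketch under that assumption (nearest-neighbour resampling here is only a stand-in; a real pipeline would use a proper interpolation such as cv2.resize):

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    # Nearest-neighbour resize in plain NumPy -- a stand-in for a proper
    # interpolating resize; only the shape handling matters for this sketch.
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def to_square(img, size=512):
    # Resize directly to size x size -- no black padding -- so the model
    # never sees an artificial straight edge to "repair" into a circle.
    return resize_nn(img, size, size)
```

After enhancement, `resize_nn(enhanced, orig_h, orig_w)` restores the original resolution, matching the round trip described above.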
Aha, that was the key! You just resized the image without adding padding!
Thank you so much!
Did you just resize images like the one below to 512 x 512 and use them for training? Wouldn't resizing to a square by interpolation degrade the resolution of the images?
Thank you :)
I totally got it! Thank you sooooo much
Is converting the image back to its original aspect ratio after training also included in your code?
Hello, first thank you so much for sharing your code! :) It helped me a lot!
I have one question: did you use all the data in the EyeQ dataset? As you may know, some images don't contain the whole circle of the fundus, like the image below.
I have been trying to use the same data as you did (but with a different model), but because some fundus images have imperfect circles, high-quality images are not generated well :( I think it is because the model tries to make the circle perfect, like the images below.
Do you have any idea how to overcome this problem? You seem to have overcome this issue (if you used the imperfect-circle fundus images), since your FIQA scores are high!
Thanks a lot in advance:>