lseventeen / FR-UNet

[JBHI2022] Full-Resolution Network and Dual-Threshold Iteration for Retinal Vessel and Coronary Angiograph Segmentation
MIT License

Visualization images not as good as those in the paper #4

Open victorshengwgit opened 1 year ago

victorshengwgit commented 1 year ago

Congratulations, and thanks for the wonderful work and for open-sourcing it! I ran the code following your guide but got predictions like this: pre4. Then I set every pixel with a non-zero value to 255 and it became like this: image. I also checked pred_b3.png, and both the max and min values in the array are 0. So I am wondering: is there something wrong with my pipeline, or something else? Looking forward to your advice!
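A quick way to sanity-check a saved prediction map is to inspect its value range before any post-processing (a minimal sketch using NumPy and Pillow; `pred_demo.png` below is a synthetic stand-in for a real output file such as pred_b3.png):

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for a saved prediction; in practice, load the
# file produced by the test script (e.g. pred_b3.png) instead.
Image.fromarray((np.eye(8) * 40).astype(np.uint8)).save("pred_demo.png")

pred = np.array(Image.open("pred_demo.png"))

# Sanity check first: if max == min == 0, the prediction is all
# background and no post-processing threshold will recover vessels.
print("min:", pred.min(), "max:", pred.max())

# Map every non-zero pixel to 255 so faint vessel pixels become visible.
binary = np.where(pred > 0, 255, 0).astype(np.uint8)
Image.fromarray(binary).save("pred_demo_binary.png")
```

If the printed max is already 0, the problem is upstream (weights or inference), not in the visualization step.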

robin-321 commented 1 year ago

I met the same problem. Have you solved it?

victorshengwgit commented 1 year ago

Yeah, you may need to retrain and then test!

robin-321 commented 1 year ago

Is there an issue with the pretrained weights the author provided? I just want to run a few images with those pretrained weights. If possible, could you share yours? Thank you very much!

lseventeen commented 1 year ago

Sorry, a correction I made later caused an issue with the pre-trained weights. I have now reverted to the initial version.

robin-321 commented 1 year ago

I've retrained the model and the problem is solved. Thank you so much! 🤞

victorshengwgit commented 11 months ago

> I've retrained the model and the problem is solved. Thank you so much! 🤞

Hi, I am wondering: after retraining the model, or using the author's checkpoint from GitHub, could you get results as good as those in the paper? I mean these good metrics: image

lseventeen commented 10 months ago

> hi bro, I am wondering whether you could get results as good as those in the paper after retraining the model or using the author's checkpoint on GitHub? I mean these good metrics: image

I have tested the weight file I uploaded and it yields the results mentioned above. There might be slight differences across environments, but the gap should be minimal. What results do you get when you reproduce using the provided weight file?

victorshengwgit commented 10 months ago

> I have tested the weight file I uploaded and it yields the results mentioned above. There might be slight differences across environments, but the gap should be minimal. What results do you get when you reproduce using the provided weight file?

Hi, thanks for your reply! First, I cannot see the images attached to your reply; could you upload them again? Second, in my environment your DRIVE weights do not seem to work, so I retrained the model and got test results on DRIVE like this: image. Looking forward to your reply!

victorshengwgit commented 10 months ago

> I have tested the weight file I uploaded and it yields the results mentioned above. There might be slight differences across environments, but the gap should be minimal. What results do you get when you reproduce using the provided weight file?

Sorry, I encountered some difficulties; could you give some suggestions? First, when I run the checkpoint following the guide on your GitHub, it does show the wonderful results from your paper, like this: 61e812f3505ca0f9781b6411b1a9872. However, when I trained your model myself and tested, I got this: cb5342db9580733f2fa596555fecc6e 255d8a4f32192565ec52523371c2627

lseventeen commented 10 months ago

> sorry, I encountered some difficulties; could you give some suggestions? First, when I run the checkpoint following the guide on your GitHub, it does show the wonderful results from your paper, like this: 61e812f3505ca0f9781b6411b1a9872. However, when I trained your model myself and tested, I got this: cb5342db9580733f2fa596555fecc6e 255d8a4f32192565ec52523371c2627

This result is already very close, and Acc and Spe are even a bit higher. However, Sen is relatively low. I have also found in my experiments that Sen is a very sensitive metric with relatively large fluctuations. This might be due to inconsistencies in environment versions; trying a different random seed might also help.
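For reproducibility across runs, a common recipe is to fix the random seed in Python, NumPy, and PyTorch together (a generic sketch, assuming the repo trains with PyTorch; the project's own config may already expose a seed option):

```python
import os
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Fix random seeds across Python, NumPy, and PyTorch."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines
    # cuDNN determinism trades some speed for run-to-run reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    os.environ["PYTHONHASHSEED"] = str(seed)


set_seed(42)
print(torch.rand(1).item())  # identical on every run with the same seed
```

Note that even with fixed seeds, different CUDA/cuDNN versions can still produce slightly different results, which is consistent with the Sen fluctuation described above.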

victorshengwgit commented 10 months ago

> This result is already very close, and Acc and Spe are even a bit higher. However, Sen is relatively low. I have also found in my experiments that Sen is a very sensitive metric with relatively large fluctuations. This might be due to inconsistencies in environment versions; trying a different random seed might also help.

Hi, I forgot to ask a second question: the model was changed after your update. image. I am confused because this model differs from the one in the paper; the new version is missing the residual part.
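For reference, the difference being asked about can be sketched as a plain convolutional block versus a residual one (a generic PyTorch illustration, not the repository's exact module):

```python
import torch
import torch.nn as nn


class PlainBlock(nn.Module):
    """Two 3x3 convolutions with no skip connection."""

    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return torch.relu(self.body(x))


class ResidualBlock(PlainBlock):
    """Same body, but the input is added back before the final ReLU."""

    def forward(self, x):
        return torch.relu(self.body(x) + x)


x = torch.randn(1, 16, 32, 32)
print(PlainBlock(16)(x).shape, ResidualBlock(16)(x).shape)
```

Removing the `+ x` skip path changes the gradient flow, which could plausibly affect training behavior relative to the architecture described in the paper.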