SharifAmit / RVGAN

[MICCAI'21] [Tensorflow] Retinal Vessel Segmentation using a Novel Multi-scale Generative Adversarial Network
BSD 3-Clause "New" or "Revised" License

pretrained model of chase can't be loaded #13

Closed Chenguang-Wang closed 2 years ago

Chenguang-Wang commented 2 years ago

Hello, the pretrained model for CHASE can't be loaded. The error message is: ValueError: Shapes (7, 7, 4, 128) and (64, 4, 7, 7) are incompatible.

In addition, could you provide the trained model? I have tried many times, but I can't get an F1 score close to the one described in the paper. So far, the best F1 score trained on DRIVE is 0.78, with Se of 0.74. The results on STARE are better (F1 = 0.8030, Se = 0.8191). Training is also time-consuming (I am using a single 2080 Ti).

SharifAmit commented 2 years ago

Hi, can you try load_model rather than load_weights?

For example:

from tensorflow import keras

g_model_coarse = keras.models.load_model('coarse_model.h5')
g_model_fine = keras.models.load_model('fine_model.h5')

Chenguang-Wang commented 2 years ago

It didn't work. The error message is: ValueError: Unknown layer: ReflectionPadding2D.

SharifAmit commented 2 years ago

Please write this code before loading the model.

import tensorflow as tf
from tensorflow.keras.layers import Layer, InputSpec

class ReflectionPadding2D(Layer):
    def __init__(self, padding=(1, 1), **kwargs):
        if isinstance(padding, int):
            padding = (padding, padding)
        self.padding = padding
        self.input_spec = [InputSpec(ndim=4)]
        super(ReflectionPadding2D, self).__init__(**kwargs)

    def compute_output_shape(self, s):
        """Assumes the "channels_last" configuration."""
        return (s[0], s[1] + 2 * self.padding[0], s[2] + 2 * self.padding[1], s[3])

    def call(self, x, mask=None):
        # Reflect-pad height and width; batch and channel dims are untouched.
        w_pad, h_pad = self.padding
        return tf.pad(x, [[0, 0], [h_pad, h_pad], [w_pad, w_pad], [0, 0]], 'REFLECT')
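
Note that with newer Keras versions, defining the class alone may not be enough; the custom layer usually also has to be passed to load_model through custom_objects. A minimal sketch, assuming the model file names from the earlier snippet:

from tensorflow import keras

# Register the custom layer so Keras can deserialize it from the .h5 files.
custom = {'ReflectionPadding2D': ReflectionPadding2D}
g_model_coarse = keras.models.load_model('coarse_model.h5', custom_objects=custom)
g_model_fine = keras.models.load_model('fine_model.h5', custom_objects=custom)
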
Chenguang-Wang commented 2 years ago

It didn't work. Same error.

SharifAmit commented 2 years ago

Hi,

I re-uploaded the weights for CHASE. It's the same download link.

Can you try again with your original code?

Thanks

Download link

https://drive.google.com/drive/folders/1e_dMNjwsPo9ykoYuy3Fn0cmTg1FH8K5H?usp=sharing
Chenguang-Wang commented 2 years ago

Yes, it worked. Thanks! Did you update the weights for DRIVE and STARE?

SharifAmit commented 2 years ago

Hi,

For STARE and DRIVE, can you try a stride of 3 when generating the crops?
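
The repository's cropping script handles this, but purely as an illustration of what a smaller stride does, here is a hypothetical NumPy sketch of overlapping patch extraction (the function name and patch_size are illustrative, not the repo's actual interface):

import numpy as np

def extract_patches(img, patch_size=128, stride=3):
    # Slide a patch_size x patch_size window across the image;
    # a smaller stride produces many more, heavily overlapping crops.
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(img[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)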

Chenguang-Wang commented 2 years ago

01_64 This is the prediction for image 1, generated with the pretrained DRIVE model at a stride of 64. I also generated images with a stride of 3 using the other model, but the F1 improved by less than 1%.

SharifAmit commented 2 years ago

Hi,

I just tested the DRIVE models and they're giving me good outputs.

Check the outputs in the following shared Drive folder:

https://drive.google.com/drive/folders/1bEqtQj8P6iNXQASJdIY42W05MVjYdR48?usp=sharing

Also, I have written a Google Colab notebook that was used to generate the outputs.

Try to replicate the code yourself from the notebook:

https://colab.research.google.com/drive/1Emoz0rdRgYauDq7u0DBrmb2VFdrxyqpI?usp=sharing

Hope this helps !

Thanks

Chenguang-Wang commented 2 years ago

Hello, I found where the problem was. In the SFA block, I had connected the first added tensor to the output instead of connecting the input to the output; the weights still loaded without error, so I didn't notice. It seems this connection is important, which is interesting. Using the pretrained model, I finally get the correct result. A stride of 3 still doesn't improve things much, though.
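
For illustration only (this is not the repository's actual SFA code, and the layer shapes are assumed), the difference described above comes down to which tensor feeds the block's final skip connection:

from tensorflow.keras import layers

def sfa_like_block(x_in, filters=64):
    # Hypothetical sketch; assumes x_in already has `filters` channels.
    y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x_in)
    first_sum = layers.Add()([x_in, y])  # "the first added tensor"
    z = layers.Conv2D(filters, 3, padding='same', activation='relu')(first_sum)
    # Correct wiring per the thread: connect the block *input* to the output,
    # not the first added tensor (i.e., not layers.Add()([first_sum, z])).
    return layers.Add()([x_in, z])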

Another question: how do you calculate the mean IoU and SSIM? They are very high. Is it TP / (TP + FP + FN)? When I calculate mIoU that way, the result is lower.

It's very kind of you to help. Thank you.

SharifAmit commented 2 years ago

Hi,

I have uploaded the eval.py code (link: https://github.com/SharifAmit/RVGAN/blob/master/eval.py).

I am closing the issue for now. If you find any problem with the results, please open this issue again.

Thanks

martin-liao commented 1 year ago

> I have uploaded the eval.py code (link: https://github.com/SharifAmit/RVGAN/blob/master/eval.py). I am closing the issue for now. If you find any problem with the results, please open this issue again.

Thanks for your wonderful work! However, I am still confused about the mIoU calculation. In my opinion, one usually calculates IoU for each class (two classes in the vessel segmentation task: background and vessel) and then reports the average as mIoU. But the eval.py code (https://github.com/SharifAmit/RVGAN/blob/master/eval.py) uses the jaccard_similarity_score function to sum correct predictions over all categories and compute a "total" IoU. It's a little strange...
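
For reference, a minimal NumPy sketch of the per-class mIoU described above, using IoU = TP / (TP + FP + FN) for each of the two classes:

import numpy as np

def per_class_miou(pred, gt):
    # Mean IoU over {background, vessel} for binary 0/1 masks.
    ious = []
    for cls in (0, 1):  # 0 = background, 1 = vessel
        p, g = (pred == cls), (gt == cls)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union if union > 0 else 1.0)
    return float(np.mean(ious))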

SharifAmit commented 1 year ago

@martin-liao In eval.py we used the normalize=True flag of the jaccard_similarity_score function, which calculates the average Jaccard similarity coefficient, not the sum of the correct predictions.

https://scikit-learn.org/0.15/modules/generated/sklearn.metrics.jaccard_similarity_score.html

From the documentation: "normalize : bool, optional (default=True). If False, return the sum of the Jaccard similarity coefficient over the sample set. Otherwise, return the average of Jaccard similarity coefficient."

martin-liao commented 1 year ago

Thanks for your response! Yes, I had overlooked the normalize option. You are right.

martin-liao commented 1 year ago

Another question: all the mIoU values reported in the manuscript are really high (e.g., U-Net, mIoU = 0.9536 on CHASE-DB1). However, when I train the model with the mmseg toolbox following your settings, the mIoU is significantly lower. I was wondering whether any trick (like NMS?) is used during evaluation.

martin-liao commented 1 year ago

Furthermore, DconnNet (accepted at CVPR 2023) reported that U-Net achieves an IoU of 59.3 for the vessel class on the CHASE-DB1 dataset. I know the training and evaluation settings and the software and hardware environments differ, but the gap between the U-Net numbers reported in RV-GAN and in DconnNet is too large.