ZFTurbo / classification_models_3D

Set of models for classification of 3D volumes
MIT License

3D DenseNet #10

Open alexopoulosanastasis opened 2 years ago

alexopoulosanastasis commented 2 years ago

Hello, and apologies in advance for bothering you.

I am currently working on my master's thesis project, in which I am trying to implement a 3D DenseNet-121 with knee MRIs as input data. While searching for how to implement a 3D version of DenseNet, I came across your repository and tried to adapt it for my application.

I have some issues regarding my attempt, and I didn't know where else to ask about them, so I apologize if I am completely off topic asking them here.

Firstly, my input shapes are (250, 320, 18, 1). When I feed them to the 3D DenseNet I built myself, with stride_size=1 for my conv block and pool_size=(2, 2, 2) with strides=(2, 2, 1) for the AveragePooling3D layer in the transition block, the model is constructed properly for that input size. However, when I try to load a DenseNet121 from the classification_models_3d.tfkeras Classifiers with input_shape=(250, 320, 18, 1), stride_size=1 and kernel_size=2, I am unable to construct it: it gives the error "Negative dimension size... for node pool4_pool/AvgPool3D". Is there a way to explicitly define the strides for the AvgPool3D layer in the transition block?
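For context, here is a minimal sketch of the kind of transition block I mean (the reduction factor, padding mode, and layer names are illustrative assumptions rather than my exact code):

    from tensorflow.keras import layers

    def transition_block_3d(x, reduction=0.5, name="transition"):
        # Illustrative DenseNet-style transition block with anisotropic pooling,
        # so the shallow depth axis (18 slices) is downsampled more slowly.
        channels = int(x.shape[-1] * reduction)
        x = layers.BatchNormalization(name=name + "_bn")(x)
        x = layers.Activation("relu", name=name + "_relu")(x)
        x = layers.Conv3D(channels, kernel_size=1, strides=1, use_bias=False,
                          name=name + "_conv")(x)
        x = layers.AveragePooling3D(pool_size=(2, 2, 2), strides=(2, 2, 1),
                                    padding="same", name=name + "_pool")(x)
        return x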

Secondly, I was thinking of loading 3D weights into my 3D DenseNet-121. Is there a folder in your repository where I can find your pre-trained ImageNet weights?

Thank you again for making this repository publicly available, and sorry if I am completely off topic asking such things here.

I look forward to your answer. Kind regards, Anastasis

ZFTurbo commented 2 years ago

Hello. Try this:

    from classification_models_3D.tfkeras import Classifiers

    # Get the 3D DenseNet121 constructor and its preprocessing function.
    model_type = 'densenet121'
    modelPoint, preprocess_input = Classifiers.get(model_type)
    model = modelPoint(
        input_shape=(250, 320, 18, 1),
        include_top=False,
        # One stride per downsampling stage; the last stage leaves the depth
        # axis (18 slices) untouched so it never shrinks below 1.
        stride_size=(
            (2, 2, 2),
            (2, 2, 2),
            (2, 2, 2),
            (2, 2, 2),
            (2, 2, 1),
        ),
        weights=None,
    )
    model.summary()

Your error was caused by the last spatial dimension being 18. By default there are 5 poolings with stride 2, so you need at least 32 for that dimension. In my example I didn't pool that dimension at the last stage.
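To make the arithmetic concrete, here is a rough sketch (exact rounding depends on padding, but the idea is the same):

    # Rough sketch: size of the depth axis after each of the 5 stride-2 poolings.
    for start in (18, 32):
        size = start
        stages = []
        for _ in range(5):
            size //= 2
            stages.append(size)
        print(start, '->', stages)
    # 18 -> [9, 4, 2, 1, 0]   the fifth pooling has nothing left to pool
    # 32 -> [16, 8, 4, 2, 1]  survives all five stages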

You can use ImageNet weights with 3-channel volumes, like (250, 320, 18, 3). It's actually easy to adapt the weights for your case, or you can simply duplicate your input data 3 times along the channel axis before feeding it to the model.
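A rough sketch of both options (variable names like volume_1ch, model_3ch and model_1ch are placeholders, not code from this repository; check model.summary() to confirm which layer is the first Conv3D):

    import numpy as np

    # Option 1: replicate a 1-channel volume to 3 channels before feeding it to
    # a model that was built with ImageNet weights and a 3-channel input.
    volume_3ch = np.repeat(volume_1ch, 3, axis=-1)  # (250, 320, 18, 1) -> (250, 320, 18, 3)

    # Option 2: build a 1-channel model with weights=None and copy weights over
    # from a 3-channel ImageNet model, collapsing the first Conv3D kernel over
    # its input-channel axis (summing matches feeding a replicated input).
    for src_layer, dst_layer in zip(model_3ch.layers, model_1ch.layers):
        weights = src_layer.get_weights()
        if weights and weights[0].ndim == 5 and weights[0].shape[-2] == 3:
            # Conv3D kernel shape: (kd, kh, kw, in_channels, out_channels)
            weights[0] = weights[0].sum(axis=-2, keepdims=True)
        dst_layer.set_weights(weights)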

alexopoulosanastasis commented 2 years ago

Thank you for your answer. I will try to implement the changes you suggested. Unfortunately, my MRI input data do not have more than 18 slices in depth, which is why I tried to change the pooling stride size.

Thanks again for your reply!!!

theresahiu commented 1 year ago

Hello, I have a related question about using ImageNet weights on 1-channel 3D input images. May I know what method can be used to adapt the weights to a 1-channel input MRI image of shape (110, 110, 110, 1)?

CarlosNacher commented 1 year ago

Hello, I have a related question about using ImageNet weights on 1-channel 3D input images. May I know what method can be used to adapt the weights to a 1-channel input MRI image of shape (110, 110, 110, 1)?

@theresahiu Did you manage to solve it? I have the same problem.