MIC-DKFZ / nnUNet


Modifying nnUNet topology from Segmentation to Classification #1843

Closed: MustafaKadhim closed this issue 8 months ago

MustafaKadhim commented 10 months ago

Hi Fabian and others! I have used nnUNet on a dataset before with the aim of segmenting labels in my images. The model performed very well! Now I would like to reuse the same model, with the pretrained weights from before, on the same dataset, but this time for a binary classification purpose. My idea was to cut the model at the bottleneck and add a Dense layer there.
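Roughly, what I have in mind looks something like the following (a minimal, untested PyTorch sketch, not nnUNet API; the placeholder `encoder` just stands in for the pretrained nnUNet encoder cut at the bottleneck, which in reality returns skip features rather than a single tensor, and all names here are illustrative):

```python
# Minimal, untested sketch (not nnUNet API): "encoder" is a placeholder for the
# pretrained nnUNet encoder cut at the bottleneck.
import torch
import torch.nn as nn

class EncoderClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, bottleneck_channels: int, num_classes: int = 2):
        super().__init__()
        self.encoder = encoder                          # pretrained feature extractor
        self.pool = nn.AdaptiveAvgPool3d(1)             # collapse the spatial dims of the bottleneck
        self.head = nn.Linear(bottleneck_channels, num_classes)  # the "Dense layer"

    def forward(self, x):
        feats = self.encoder(x)                         # (B, C, D, H, W) bottleneck features
        feats = self.pool(feats).flatten(1)             # (B, C)
        return self.head(feats)                         # (B, num_classes) logits

# toy stand-in encoder, just to make the sketch runnable end to end
encoder = nn.Sequential(
    nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
model = EncoderClassifier(encoder, bottleneck_channels=64, num_classes=2)
logits = model(torch.randn(2, 1, 64, 64, 64))           # -> shape (2, 2)
```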

Do you think this is possible with the current version of nnUNet, or should I find another approach?

Thank you so much for your time!

Musti

ykirchhoff commented 10 months ago

Hi Musti,

First of all, I would in general not recommend using nnUNet for classification. It really shines at segmentation, but there are better out-of-the-box solutions for classification. Having said that, there are ways to use nnUNet for classification, and it can actually be interesting to train both tasks in parallel so that you can use the supervision signal from both the classification and the segmentation task. The biggest problem is probably the patch-wise training and inference of nnUNet, which makes it rather difficult to use for classification. If you want to try that, I can help guide you through what you would need to do.

Best, Yannick

MustafaKadhim commented 10 months ago

Hi Yannick!

First of all thank you so much for the quick reply and sharing your expertise on the topic! For a PhD student like myself, it is highly appreciated πŸ™

The idea is not to use nnUNet fully for classification, but rather to use the parts responsible for feature extraction down to the bottleneck, where I can then add a CNN that does the classification based on the features collected by the segmentation-pretrained nnUNet.

Do you still find this approach less interesting than just using, let's say, a ResNet trained from scratch for classification?

Thank you and the team so much for your help!

Best

Musti
PhD student, Department of Medical Physics, Lund, Sweden πŸ‡ΈπŸ‡ͺ


ykirchhoff commented 10 months ago

Hi Musti,

no problem at all. Let me elaborate a bit. Reusing the encoder trained with nnUNet should be very straightforward in any other training pipeline; you just need the definition of the architecture. The main problem I see is that your network is trained on resampled data and with patches, so the question is how to transfer it to your classification task. If you resample your data to a common shape, the changed spacings might lead to a degradation of the extracted features and make them more or less useless, at least compared to something like a ResNet. We recently used a sliding-window approach for training and inference with a ResNet (not pretrained with nnUNet), but using the preprocessed data from nnUNet, for the TDSC-ABUS challenge (https://tdscabus.github.io/), which gave decent results. That might be a way to reuse the nnUNet encoder, which could actually benefit the performance. If you feel like trying that, it would certainly be an interesting study, but you might also just be wasting your time. In the end, ResNets are very strong for classification tasks.
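To make the sliding-window idea concrete, a toy version of the inference side could look like this (untested sketch; `model` is any patch-level classifier, the patch size and stride are illustrative, and averaging patch probabilities is just one possible aggregation):

```python
# Untested sketch of patch-wise (sliding-window) classification over one
# preprocessed volume; "model" is any patch-level classifier, the patch size
# and stride are illustrative, and the volume is assumed to be at least
# patch-sized in every dimension.
import torch

@torch.no_grad()
def sliding_window_classify(model, volume, patch_size=(64, 64, 64), stride=(32, 32, 32)):
    """volume: (C, D, H, W) tensor. Returns class probabilities averaged over all patches."""
    _, D, H, W = volume.shape
    probs = []
    for z in range(0, D - patch_size[0] + 1, stride[0]):
        for y in range(0, H - patch_size[1] + 1, stride[1]):
            for x in range(0, W - patch_size[2] + 1, stride[2]):
                patch = volume[:, z:z + patch_size[0], y:y + patch_size[1], x:x + patch_size[2]]
                logits = model(patch.unsqueeze(0))       # (1, num_classes)
                probs.append(torch.softmax(logits, dim=1))
    return torch.cat(probs).mean(dim=0)                  # (num_classes,)
```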

Sorry that I can't give a more definitive answer, but if you want to try it, I am happy to help.

Best, Yannick

MustafaKadhim commented 10 months ago

Hi Yannick!

Thank you for the detailed answer and advice. Based on your tips, I will take the matter to my supervisors to discuss which path we should take. I have recently started using PyTorch and nnUNet, and it has been really fun to play around with such a powerful segmentation model. So it would be cool if we could use it for many purposes instead of only segmentation πŸ˜ƒ

Talk to you soon!

Thank you for all your help and guidance,

Musti


ykirchhoff commented 10 months ago

Hi Musti,

Glad to hear that you enjoy using nnUNet so much. It is really nice to see how people use it for all kinds of applications, especially going beyond the "intended" use of simple segmentation.

I am curious to hear how you will continue, feel free to keep me updated!

Best, Yannick

ykirchhoff commented 10 months ago

Hi Musti,

Just as an FYI, this publication might be interesting for your case [Poster: https://neurips.cc/media/PosterPDFs/NeurIPS%202023/71164.png?t=1699319877.7816777, Paper: https://openreview.net/forum?id=b8xowIlZ7v]. At least from the poster, there are still some open questions and I don't really trust the results, especially as there are other publications pointing in a somewhat different direction, but it could still be worth considering. There are probably more detailed analyses in the paper; I just haven't found the time yet to look at it.

Best, Yannick

MustafaKadhim commented 10 months ago

Hi Yannick!

Thank you so much for the article tip, I will read through it! Right now, I'm playing around with dividing my images into 64Β³ cube patches and seeing how that plays out with regard to combining a ResNet18 + U-Net for classification or segmentation, or maybe both at the same time by using suitable loss functions and accuracy metrics.
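The "both at the same time" part would presumably just come down to a weighted sum of a segmentation loss and a classification loss, something like this (illustrative sketch; the loss choices and the 0.5 weight are placeholders, not something we have settled on):

```python
# Illustrative sketch of a joint loss; the loss functions and the 0.5 weight
# are placeholders, not a recommendation.
import torch
import torch.nn as nn

seg_loss_fn = nn.CrossEntropyLoss()   # per-voxel CE on segmentation logits
cls_loss_fn = nn.CrossEntropyLoss()   # image-level CE on classification logits

def multitask_loss(seg_logits, seg_target, cls_logits, cls_target, cls_weight=0.5):
    """seg_logits: (B, C, D, H, W), seg_target: (B, D, H, W) long,
    cls_logits: (B, num_classes), cls_target: (B,) long."""
    return seg_loss_fn(seg_logits, seg_target) + cls_weight * cls_loss_fn(cls_logits, cls_target)

# toy shapes: batch of 2, 64^3 patches, 2 segmentation classes, 2 image-level classes
loss = multitask_loss(
    torch.randn(2, 2, 64, 64, 64), torch.randint(0, 2, (2, 64, 64, 64)),
    torch.randn(2, 2), torch.randint(0, 2, (2,)),
)
```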

Will keep you updated!

Musti


ykirchhoff commented 9 months ago

Hi Musti,

I was just checking my assigned issues and was wondering if you already have some results to share?

Best, Yannick

MustafaKadhim commented 9 months ago

Hi Yannick!

Thanks for the follow up!

So what we did: we trained nnUNet to segment some regions of interest until we reached a satisfactory Dice score. Then we took only the encoder part, reused the weights from the segmentation training, attached a ResNet18/30/50 as a "decoder", added some dense layers, and turned it into a regression model on the same data to predict a continuous variable. The model seems to achieve higher accuracy (lower MSE) than when we only use a ResNet for the task. Very exciting stuff!
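The weight reuse itself was basically partial state-dict loading, roughly like this (simplified, untested sketch; the "network_weights" key and the "encoder." prefix are assumptions about how the segmentation checkpoint was saved and may need adjusting):

```python
# Simplified sketch of the weight reuse step: partial state-dict loading.
# The "network_weights" key and the "encoder." prefix are assumptions about
# how the segmentation checkpoint was saved; adjust them to your checkpoint.
import torch
import torch.nn as nn

def load_pretrained_encoder(model: nn.Module, checkpoint_path: str, prefix: str = "encoder."):
    """Copy matching encoder tensors from a segmentation checkpoint into `model`."""
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    state = checkpoint.get("network_weights", checkpoint)   # unwrap if the weights are nested
    encoder_state = {k: v for k, v in state.items() if k.startswith(prefix)}
    missing, unexpected = model.load_state_dict(encoder_state, strict=False)
    print(f"loaded {len(encoder_state)} tensors, "
          f"{len(missing)} missing, {len(unexpected)} unexpected")
    return model
```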

The research group is currently working with the data and is in the process of publishing something in the near future. I will keep you posted, of course.

Hope this helps; otherwise, don't hesitate to reach out through LinkedIn for deeper insights 😊

Have a nice weekend,

Musti


ykirchhoff commented 8 months ago

Hi Musti,

That is really nice to hear; I am already looking forward to reading the publication! I would suggest that we close this issue as resolved and move further discussions to LinkedIn :)

Best, Yannick

MustafaKadhim commented 8 months ago

Thanks for all the help!

Musti