MIC-DKFZ / dynamic-network-architectures


feat: enable encoder-only architecture for nnU-Net #6

Open strasserpatrick opened 2 months ago

strasserpatrick commented 2 months ago

Hello, thanks for the great work.

I am exploring self-supervised pretraining for nnU-Net. For that, I do encoder-only pretraining and then transfer the learned encoder weights to the final full U-Net architecture for fine-tuning.
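For the fine-tuning side, the transfer can be a filtered state-dict copy. A minimal sketch, assuming the full network keeps its encoder under the `encoder.` state-dict prefix (as `PlainConvUNet` does); the constructor values and the checkpoint path are illustrative, not actual plans values:

```python
import torch
from torch import nn
from dynamic_network_architectures.architectures.unet import PlainConvUNet

# Illustrative configuration, not taken from a real plans.json
full_unet = PlainConvUNet(
    input_channels=1, n_stages=6,
    features_per_stage=(32, 64, 128, 256, 320, 320),
    conv_op=nn.Conv3d, kernel_sizes=3, strides=(1, 2, 2, 2, 2, 2),
    n_conv_per_stage=2, num_classes=2, n_conv_per_stage_decoder=2,
    conv_bias=True, norm_op=nn.InstanceNorm3d,
    norm_op_kwargs={"eps": 1e-5, "affine": True},
    nonlin=nn.LeakyReLU, nonlin_kwargs={"inplace": True},
)

# "encoder_pretrained.pth" is a hypothetical checkpoint from the pretraining run
pretrained_sd = torch.load("encoder_pretrained.pth")
full_sd = full_unet.state_dict()
for key, value in pretrained_sd.items():
    # prefix keys so they match the full network's "encoder." namespace
    target = key if key.startswith("encoder.") else f"encoder.{key}"
    if target in full_sd and full_sd[target].shape == value.shape:
        full_sd[target] = value
full_unet.load_state_dict(full_sd)
```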

With this small adjustment, the workflow for that becomes quite simple:

  1. I follow the nnU-Net guide for pretraining and fine-tuning.
  2. I edit the plans.json of the pretraining configuration and change the network architecture to PlainConvEncoder. With the kwargs that I add in this PR, the additional decoder configuration used for fine-tuning is simply ignored, so no further plans-file editing is needed (see the sketch after the snippet below):
```json
...
"architecture": {
    "network_class_name": "dynamic_network_architectures.building_blocks.plain_conv_encoder.PlainConvEncoder",
    "arch_kwargs": {
        "n_stages": 6,
        "features_per_stage": [32, 64, 128, 256, 320, 320],
        "conv_op": "torch.nn.modules.conv.Conv3d",
        "kernel_sizes": [
...
```
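To illustrate the effect, here is a rough sketch of resolving such a plans entry into an encoder-only network. The snippet above is truncated, so `kernel_sizes`, `strides`, and `n_conv_per_stage` below are filled with typical 3d_fullres-style values purely for illustration; the import-string resolution mimics what nnU-Net does with its plans. Without the `**kwargs` added in this PR, the decoder-only key would raise a TypeError:

```python
import pydoc

# Mirrors the plans snippet above; truncated entries are assumed values
arch_kwargs = {
    "n_stages": 6,
    "features_per_stage": [32, 64, 128, 256, 320, 320],
    "conv_op": "torch.nn.modules.conv.Conv3d",
    "kernel_sizes": [[3, 3, 3]] * 6,
    "strides": [[1, 1, 1]] + [[2, 2, 2]] * 5,
    "n_conv_per_stage": [2] * 6,
    "conv_bias": True,
    "norm_op": "torch.nn.modules.instancenorm.InstanceNorm3d",
    "norm_op_kwargs": {"eps": 1e-5, "affine": True},
    "nonlin": "torch.nn.modules.activation.LeakyReLU",
    "nonlin_kwargs": {"inplace": True},
    # decoder-only entry: swallowed by the new **kwargs, so the same
    # plans file can serve both pretraining and fine-tuning
    "n_conv_per_stage_decoder": [2, 2, 2, 2, 2],
}

network_cls = pydoc.locate(
    "dynamic_network_architectures.building_blocks.plain_conv_encoder.PlainConvEncoder"
)
# resolve import-string entries to actual classes, as nnU-Net does
for k in ("conv_op", "norm_op", "nonlin"):
    arch_kwargs[k] = pydoc.locate(arch_kwargs[k])

encoder = network_cls(input_channels=1, **arch_kwargs)
```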

When finished, I plan to open a PR on nnU-Net for the self-supervised learning part, if you are interested in that :)

With nnU-Net and its plans files, everything stays nicely configurable. The additional kwargs let me quickly initialize only the encoder in nnUNetTrainers, which gives me the framework for self-supervised pretraining: training the feature extractor (the encoder) on pseudo-supervised pretext tasks.
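As a concrete illustration (not part of this PR), a pretext task can be as simple as attaching a small classification head to the encoder's deepest feature map, e.g. for rotation prediction; the head and task here are hypothetical examples:

```python
import torch
from torch import nn

class RotationPretextModel(nn.Module):
    """Hypothetical pretext model: predict which of 4 rotations was applied.
    Only the encoder is kept afterwards; the head is thrown away."""

    def __init__(self, encoder, num_features=320, num_rotations=4):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(num_features, num_rotations),
        )

    def forward(self, x):
        # PlainConvEncoder returns the last stage's feature map by default
        # (return_skips=False)
        return self.head(self.encoder(x))
```

After pretraining, `model.encoder.state_dict()` is what gets transferred into the full U-Net, as sketched at the top.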

Let me know what you think of this!