Project-MONAI / research-contributions

Implementations of recent research prototypes/demonstrations using MONAI.
https://monai.io/
Apache License 2.0

Do the self-supervised pre-trained weights for the Swin UNETR backbone (keys like "module.layers1.0.downsample.reduction.weight") miss the "swinViT" prefix (i.e. "module.swinViT.layers1.0.downsample.reduction.weight")? #268

Closed · shouwangzhe134 closed this 1 year ago

shouwangzhe134 commented 1 year ago

Describe the bug
In the provided `model_swinvit.pt`, the weight keys of the Swin-ViT model do not contain the "swinViT" prefix.

After `state_dict[key.replace("module.", "")] = state_dict.pop(key)`, the `state_dict` keys look like `layers1.0.downsample.reduction.weight` (screenshot omitted). However, the corresponding weight keys of SwinUNETR look like `swinViT.layers1.0.downsample.reduction.weight` (screenshot omitted).
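A minimal way to reproduce the mismatch (paths and constructor arguments follow the BTCV example in this repo; the `state_dict` field and `img_size` argument are assumptions, and `img_size` may be deprecated in newer MONAI releases):

```python
import torch
from monai.networks.nets import SwinUNETR

# Load the SSL checkpoint; the BTCV scripts read the tensors from a
# "state_dict" field, so that layout is assumed here.
ckpt = torch.load("./pretrained_models/model_swinvit.pt", map_location="cpu")
state_dict = ckpt["state_dict"]
print(list(state_dict.keys())[:3])
# keys carry a 'module.' prefix and no 'swinViT', e.g.
# 'module.layers1.0.downsample.reduction.weight'

# Build the BTCV-sized model and compare its parameter names.
model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=14, feature_size=48)
print([k for k in model.state_dict() if "downsample.reduction" in k][:1])
# keys look like 'swinViT.layers1.0.downsample.reduction.weight'
```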

aoguangcheng commented 1 year ago

The "model_swinvit.pt" is the self-supervised pre-trained weights, and the "swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt" is the trained Swin Transformer weights which you can run inference directly.

shouwangzhe134 commented 1 year ago

The "model_swinvit.pt" is the self-supervised pre-trained weights, and the "swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt" is the trained Swin Transformer weights which you can run inference directly.

To train a Swin UNETR with self-supervised encoder weights on a single GPU with gradient checkpointing, I understand that a new ordered dict must be created without the `module.` prefix. But the keys in `model_swinvit.pt` also lack the `swinViT` prefix, so I don't think this can load the encoder weights (Swin-ViT, SSL pre-trained).

Where did I go wrong? Can you help me?

aoguangcheng commented 1 year ago

The "model_swinvit.pt" is the self-supervised pre-trained weights, and the "swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt" is the trained Swin Transformer weights which you can run inference directly.

To train a Swin UNETR with self-supervised encoder weights on a single GPU with gradient check-pointing,I can understand that create a new ordered dict without the module prefix. But in the "model_swinvit.pt", there is not the prefix of "swinVit". I don't think that can load the encoder weights (Swin-ViT, SSL pre-trained).

Where did I go wrong? Can you help me?

This is my training command and directory layout:

```
CUDA_VISIBLE_DEVICES=0 python main.py --json_list=dataset_0.json --data_dir=./dataset --feature_size=48 --use_ssl_pretrained \
    --roi_x=96 --roi_y=96 --roi_z=96 --use_checkpoint --batch_size=1 --max_epochs=5000 --save_checkpoint
```

```
├── BTCV
│   ├── assets
│   │   └── swin_unetr.png
│   ├── dataset
│   │   ├── dataset_0.json
│   │   ├── imagesTr
│   │   ├── imagesTs
│   │   ├── __init__.py
│   │   └── labelsTr
│   ├── main.py
│   ├── nohup.out
│   ├── optimizers
│   │   ├── __init__.py
│   │   ├── lr_scheduler.py
│   │   └── __pycache__
│   │       ├── __init__.cpython-37.pyc
│   │       └── lr_scheduler.cpython-37.pyc
│   ├── outputs
│   │   ├── __init__.py
│   │   └── test1
│   │       ├── img0035.nii.gz
│   │       ├── img0036.nii.gz
│   │       ├── img0037.nii.gz
│   │       ├── img0038.nii.gz
│   │       ├── img0039.nii.gz
│   │       └── img0040.nii.gz
│   ├── pretrained_models
│   │   ├── __init__.py
│   │   ├── model_swinvit.pt
│   │   └── swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt
│   ├── README.md
│   ├── requirements.txt
│   ├── runs
│   │   ├── __init__.py
│   │   └── test
│   │       ├── model_final.pt
│   │       └── model.pt
│   ├── scripts.sh
│   ├── test.py
│   ├── trainer.py
│   └── utils
│       ├── data_utils.py
│       ├── __init__.py
│       └── utils.py
```

aoguangcheng commented 1 year ago

The "model_swinvit.pt" is the self-supervised pre-trained weights, and the "swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt" is the trained Swin Transformer weights which you can run inference directly.

To train a Swin UNETR with self-supervised encoder weights on a single GPU with gradient check-pointing,I can understand that create a new ordered dict without the module prefix. But in the "model_swinvit.pt", there is not the prefix of "swinVit". I don't think that can load the encoder weights (Swin-ViT, SSL pre-trained). Where did I go wrong? Can you help me?

This is my training scripts and directory path: ··· CUDA_VISIBLE_DEVICES=0 python main.py --json_list=dataset_0.json --data_dir=./dataset --feature_size=48 --use_ssl_pretrained --roi_x=96 --roi_y=96 --roi_z=96 --use_checkpoint --batch_size=1 --max_epochs=5000 --save_checkpoint ··· ├── BTCV │ ├── assets │ │ └── swin_unetr.png │ ├── dataset │ │ ├── dataset_0.json │ │ ├── imagesTr │ │ ├── imagesTs │ │ ├── init.py │ │ └── labelsTr │ ├── main.py │ ├── nohup.out │ ├── optimizers │ │ ├── init.py │ │ ├── lr_scheduler.py │ │ └── pycache │ │ ├── init.cpython-37.pyc │ │ └── lr_scheduler.cpython-37.pyc │ ├── outputs │ │ ├── init.py │ │ └── test1 │ │ ├── img0035.nii.gz │ │ ├── img0036.nii.gz │ │ ├── img0037.nii.gz │ │ ├── img0038.nii.gz │ │ ├── img0039.nii.gz │ │ └── img0040.nii.gz │ ├── pretrained_models │ │ ├── init.py │ │ ├── model_swinvit.pt │ │ └── swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt │ ├── README.md │ ├── requirements.txt │ ├── runs │ │ ├── init.py │ │ └── test │ │ ├── model_final.pt │ │ └── model.pt │ ├── scripts.sh │ ├── test.py │ ├── trainer.py │ └── utils │ ├── data_utils.py │ ├── init.py │ └── utils.py

It works well for me in both the single-GPU and multi-GPU settings.

UCPRER commented 1 year ago

Hello, have you solved this problem? I encountered the same issue. My workaround is to rename the keys in `model_swinvit.pt`: `module` -> `swinViT` and `fc` -> `linear`. But I am not sure whether this is the correct method.
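A minimal sketch of that workaround (the substitutions are my guess above, not an officially documented mapping; model arguments follow the BTCV setup):

```python
import torch
from monai.networks.nets import SwinUNETR

model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=14, feature_size=48)

ckpt = torch.load("./pretrained_models/model_swinvit.pt", map_location="cpu")
state_dict = ckpt["state_dict"]

# Re-root the encoder keys under "swinViT." and map the SSL MLP naming
# ("fc1"/"fc2") onto MONAI's ("linear1"/"linear2").
renamed = {
    k.replace("module.", "swinViT.").replace("fc", "linear"): v
    for k, v in state_dict.items()
}

# strict=False: only the Swin-ViT encoder weights match; the CNN decoder
# of Swin UNETR keeps its random initialization.
missing, unexpected = model.load_state_dict(renamed, strict=False)
print(sorted(unexpected))  # SSL pre-training heads are expected to remain unmatched
```

Checking `unexpected` (and the `missing` decoder keys) is a quick way to confirm the encoder actually received the weights; a plain `strict=False` load on the unrenamed keys silently skips everything.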

fleecedragoon commented 10 months ago

I also had the same problem; the approach above seems to be the way to load them.
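For reference, recent MONAI releases bundle this remapping into the model itself; a sketch following the MONAI Swin UNETR BTCV tutorial (check that your installed version provides `SwinUNETR.load_from`):

```python
import torch
from monai.networks.nets import SwinUNETR

model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=14, feature_size=48)

# load_from() maps the SSL checkpoint's key names onto the swinViT submodule,
# so no manual renaming is needed.
weights = torch.load("./pretrained_models/model_swinvit.pt", map_location="cpu")
model.load_from(weights=weights)
```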