Hello, authors! I am very interested in your work, but I ran into some problems while trying to reproduce it and hope you can help. After downloading the datasets listed in the README, I preprocessed the data with `nnUNet_plan_and_preprocess -t 1` (I registered ACDC as task 001, EM as 002, Synapse as 003, and ISIC as 004). However, when training with train_or_test.h, only the ACDC and ISIC datasets train normally; the other datasets never make it into the second epoch. Could you please advise on what might be going wrong? The preprocessing calls I used are sketched below, followed by the training log.
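For reference, the planning/preprocessing step was run once per task ID, roughly as follows. The dataset-to-ID mapping in the comments is just how I registered the tasks locally; apart from Task003_Synapse, which appears in the log, the task folder names are only illustrative.

```bash
# nnUNet v1 planning + preprocessing, called once per task ID
# (my local mapping: ACDC -> 001, EM -> 002, Synapse -> 003, ISIC -> 004)
nnUNet_plan_and_preprocess -t 1   # ACDC
nnUNet_plan_and_preprocess -t 2   # EM
nnUNet_plan_and_preprocess -t 3   # Synapse (Task003_Synapse in the log below)
nnUNet_plan_and_preprocess -t 4   # ISIC
```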
```
unet2022_synapse_224
###############################################
Task name: Task003_Synapse
My trainer class is: <class 'nnunet.training.network_training.nnUNetTrainerV2_unet2022_synapse_224.nnUNetTrainerV2_unet2022_synapse_224'>
For that I will be using the following configuration:
I am using batch dice + CE loss
I am using data from this folder: /home/share/ZYY/Task/nnUNet_preprocessed/Task003_Synapse/nnUNetData_plans_v2.1_2D
###############################################
2022-12-03 23:02:26.020370: Hyper_parameters: base_learning_rate: 0.0001
batch_size: 16
blocks_num:
3 3 3 3
convolution_stem_down: 4
crop_size: 224 224
epochs_num: 300
model_size: Base
val_eval_criterion_alpha: 0.0
window_size: 7 7 14 7
2022-12-03 23:02:26.183795: seed: 42
loading dataset
2022-12-03 23:02:26.339212: This split has 2211 training and 1568 validation cases.
100%|██████████| 2211/2211 [00:00<00:00, 2216.51it/s]
100%|██████████| 1568/1568 [00:00<00:00, 3100.59it/s]
unpacking dataset done
/home/amax/anaconda3/envs/torch1.11/lib/python3.9/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484809662/work/aten/src/ATen/native/TensorShape.cpp:2894.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
model_down.layers.0.blocks.0.gamma
model_down.layers.0.blocks.0.dwconv.weight
model_down.layers.0.blocks.0.dwconv.bias
model_down.layers.0.blocks.0.norm.weight
model_down.layers.0.blocks.0.norm.bias
model_down.layers.0.blocks.0.pwconv1.weight
model_down.layers.0.blocks.0.pwconv1.bias
model_down.layers.0.blocks.0.pwconv2.weight
model_down.layers.0.blocks.0.pwconv2.bias
model_down.layers.0.blocks.1.gamma
model_down.layers.0.blocks.1.dwconv.weight
model_down.layers.0.blocks.1.dwconv.bias
model_down.layers.0.blocks.1.norm.weight
model_down.layers.0.blocks.1.norm.bias
model_down.layers.0.blocks.1.pwconv1.weight
model_down.layers.0.blocks.1.pwconv1.bias
model_down.layers.0.blocks.1.pwconv2.weight
model_down.layers.0.blocks.1.pwconv2.bias
model_down.layers.1.blocks.0.gamma
model_down.layers.1.blocks.0.dwconv.weight
model_down.layers.1.blocks.0.dwconv.bias
model_down.layers.1.blocks.0.norm.weight
model_down.layers.1.blocks.0.norm.bias
model_down.layers.1.blocks.0.pwconv1.weight
model_down.layers.1.blocks.0.pwconv1.bias
model_down.layers.1.blocks.0.pwconv2.weight
model_down.layers.1.blocks.0.pwconv2.bias
model_down.layers.1.blocks.1.gamma
model_down.layers.1.blocks.1.dwconv.weight
model_down.layers.1.blocks.1.dwconv.bias
model_down.layers.1.blocks.1.norm.weight
model_down.layers.1.blocks.1.norm.bias
model_down.layers.1.blocks.1.pwconv1.weight
model_down.layers.1.blocks.1.pwconv1.bias
model_down.layers.1.blocks.1.pwconv2.weight
model_down.layers.1.blocks.1.pwconv2.bias
model_down.layers.2.blocks.0.gamma
model_down.layers.2.blocks.0.dwconv.weight
model_down.layers.2.blocks.0.dwconv.bias
model_down.layers.2.blocks.0.norm.weight
model_down.layers.2.blocks.0.norm.bias
model_down.layers.2.blocks.0.pwconv1.weight
model_down.layers.2.blocks.0.pwconv1.bias
model_down.layers.2.blocks.0.pwconv2.weight
model_down.layers.2.blocks.0.pwconv2.bias
model_down.layers.2.blocks.1.gamma
model_down.layers.2.blocks.1.dwconv.weight
model_down.layers.2.blocks.1.dwconv.bias
model_down.layers.2.blocks.1.norm.weight
model_down.layers.2.blocks.1.norm.bias
model_down.layers.2.blocks.1.pwconv1.weight
model_down.layers.2.blocks.1.pwconv1.bias
model_down.layers.2.blocks.1.pwconv2.weight
model_down.layers.2.blocks.1.pwconv2.bias
model_down.layers.3.blocks.0.gamma
model_down.layers.3.blocks.0.dwconv.weight
model_down.layers.3.blocks.0.dwconv.bias
model_down.layers.3.blocks.0.norm.weight
model_down.layers.3.blocks.0.norm.bias
model_down.layers.3.blocks.0.pwconv1.weight
model_down.layers.3.blocks.0.pwconv1.bias
model_down.layers.3.blocks.0.pwconv2.weight
model_down.layers.3.blocks.0.pwconv2.bias
model_down.layers.3.blocks.1.gamma
model_down.layers.3.blocks.1.dwconv.weight
model_down.layers.3.blocks.1.dwconv.bias
model_down.layers.3.blocks.1.norm.weight
model_down.layers.3.blocks.1.norm.bias
model_down.layers.3.blocks.1.pwconv1.weight
model_down.layers.3.blocks.1.pwconv1.bias
model_down.layers.3.blocks.1.pwconv2.weight
model_down.layers.3.blocks.1.pwconv2.bias
decoder.layers.0.blocks.0.gamma
decoder.layers.0.blocks.0.dwconv.weight
decoder.layers.0.blocks.0.dwconv.bias
decoder.layers.0.blocks.0.norm.weight
decoder.layers.0.blocks.0.norm.bias
decoder.layers.0.blocks.0.pwconv1.weight
decoder.layers.0.blocks.0.pwconv1.bias
decoder.layers.0.blocks.0.pwconv2.weight
decoder.layers.0.blocks.0.pwconv2.bias
decoder.layers.0.blocks.1.gamma
decoder.layers.0.blocks.1.dwconv.weight
decoder.layers.0.blocks.1.dwconv.bias
decoder.layers.0.blocks.1.norm.weight
decoder.layers.0.blocks.1.norm.bias
decoder.layers.0.blocks.1.pwconv1.weight
decoder.layers.0.blocks.1.pwconv1.bias
decoder.layers.0.blocks.1.pwconv2.weight
decoder.layers.0.blocks.1.pwconv2.bias
decoder.layers.1.blocks.0.gamma
decoder.layers.1.blocks.0.dwconv.weight
decoder.layers.1.blocks.0.dwconv.bias
decoder.layers.1.blocks.0.norm.weight
decoder.layers.1.blocks.0.norm.bias
decoder.layers.1.blocks.0.pwconv1.weight
decoder.layers.1.blocks.0.pwconv1.bias
decoder.layers.1.blocks.0.pwconv2.weight
decoder.layers.1.blocks.0.pwconv2.bias
decoder.layers.1.blocks.1.gamma
decoder.layers.1.blocks.1.dwconv.weight
decoder.layers.1.blocks.1.dwconv.bias
decoder.layers.1.blocks.1.norm.weight
decoder.layers.1.blocks.1.norm.bias
decoder.layers.1.blocks.1.pwconv1.weight
decoder.layers.1.blocks.1.pwconv1.bias
decoder.layers.1.blocks.1.pwconv2.weight
decoder.layers.1.blocks.1.pwconv2.bias
decoder.layers.2.blocks.0.gamma
decoder.layers.2.blocks.0.dwconv.weight
decoder.layers.2.blocks.0.dwconv.bias
decoder.layers.2.blocks.0.norm.weight
decoder.layers.2.blocks.0.norm.bias
decoder.layers.2.blocks.0.pwconv1.weight
decoder.layers.2.blocks.0.pwconv1.bias
decoder.layers.2.blocks.0.pwconv2.weight
decoder.layers.2.blocks.0.pwconv2.bias
decoder.layers.2.blocks.1.gamma
decoder.layers.2.blocks.1.dwconv.weight
decoder.layers.2.blocks.1.dwconv.bias
decoder.layers.2.blocks.1.norm.weight
decoder.layers.2.blocks.1.norm.bias
decoder.layers.2.blocks.1.pwconv1.weight
decoder.layers.2.blocks.1.pwconv1.bias
decoder.layers.2.blocks.1.pwconv2.weight
decoder.layers.2.blocks.1.pwconv2.bias
Successfully load the weight above!
I am using the pre_trained weight!!
Total params: 72.93M
2022-12-03 23:02:36.428531: Unable to plot network architecture:
2022-12-03 23:02:36.428705: No module named 'hiddenlayer'
2022-12-03 23:02:36.428770: printing the network instead:
2022-12-03 23:02:36.428819: unet2022( (model_down): encoder( (patch_embed): PatchEmbed( (project_block): ModuleList( (0): project( (conv1): Conv2d(1, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (activate): GELU(approximate=none) (norm1): LayerNorm((64,), eps=1e-05, elementwise_affine=True) (norm2): LayerNorm((64,), eps=1e-05, elementwise_affine=True) ) (1): project( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (activate): GELU(approximate=none) (norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True) ) ) (norm): LayerNorm((128,), eps=1e-05, elementwise_affine=True) ) (pos_drop): Dropout(p=0.0, inplace=False) (layers): ModuleList( (0): BasicLayer( (blocks): ModuleList( (0): Block( (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) (norm): LayerNorm() (pwconv1): Linear(in_features=128, out_features=512, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=512, out_features=128, bias=True) (drop_path): Identity() (blocks_tr): MSABlock( (norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=128, out_features=384, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=128, out_features=128, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): Identity() (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) ) ) (1): Block( (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) (norm): LayerNorm() (pwconv1): Linear(in_features=128, out_features=512, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=512, out_features=128, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=128, out_features=384, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=128, out_features=128, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) ) ) (2): Block( (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) (norm): LayerNorm() (pwconv1): Linear(in_features=128, out_features=512, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=512, out_features=128, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=128, out_features=384, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=128, out_features=128, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) ) ) ) (downsample): PatchMerging( (reduction): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (norm): LayerNorm((128,), eps=1e-05, elementwise_affine=True) ) ) (1): BasicLayer( (blocks): ModuleList( (0): Block( (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) (norm): LayerNorm() (pwconv1): Linear(in_features=256, 
out_features=1024, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=1024, out_features=256, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=256, out_features=768, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=256, out_features=256, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) ) ) (1): Block( (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) (norm): LayerNorm() (pwconv1): Linear(in_features=256, out_features=1024, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=1024, out_features=256, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=256, out_features=768, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=256, out_features=256, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) ) ) (2): Block( (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) (norm): LayerNorm() (pwconv1): Linear(in_features=256, out_features=1024, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=1024, out_features=256, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=256, out_features=768, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=256, out_features=256, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) ) ) ) (downsample): PatchMerging( (reduction): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True) ) ) (2): BasicLayer( (blocks): ModuleList( (0): Block( (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) (norm): LayerNorm() (pwconv1): Linear(in_features=512, out_features=2048, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=2048, out_features=512, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=512, out_features=1536, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=512, out_features=512, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) ) ) (1): Block( (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) (norm): LayerNorm() (pwconv1): Linear(in_features=512, out_features=2048, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=2048, out_features=512, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): 
Linear(in_features=512, out_features=1536, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=512, out_features=512, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) ) ) (2): Block( (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) (norm): LayerNorm() (pwconv1): Linear(in_features=512, out_features=2048, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=2048, out_features=512, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=512, out_features=1536, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=512, out_features=512, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) ) ) ) (downsample): PatchMerging( (reduction): Conv2d(512, 1024, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) ) ) (3): BasicLayer( (blocks): ModuleList( (0): Block( (dwconv): Conv2d(1024, 1024, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1024) (norm): LayerNorm() (pwconv1): Linear(in_features=1024, out_features=4096, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=4096, out_features=1024, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=1024, out_features=3072, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=1024, out_features=1024, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(1024, 1024, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1024) ) ) (1): Block( (dwconv): Conv2d(1024, 1024, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1024) (norm): LayerNorm() (pwconv1): Linear(in_features=1024, out_features=4096, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=4096, out_features=1024, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=1024, out_features=3072, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=1024, out_features=1024, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(1024, 1024, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1024) ) ) (2): Block( (dwconv): Conv2d(1024, 1024, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1024) (norm): LayerNorm() (pwconv1): Linear(in_features=1024, out_features=4096, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=4096, out_features=1024, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=1024, out_features=3072, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=1024, out_features=1024, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) 
(drop_path): DropPath() (dwconv): Conv2d(1024, 1024, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1024) ) ) ) ) ) (norm0): LayerNorm((128,), eps=1e-05, elementwise_affine=True) (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True) (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (norm3): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) ) (decoder): decoder( (pos_drop): Dropout(p=0.0, inplace=False) (layers): ModuleList( (0): BasicLayer_up( (blocks): ModuleList( (0): Block( (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) (norm): LayerNorm() (pwconv1): Linear(in_features=128, out_features=512, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=512, out_features=128, bias=True) (drop_path): Identity() (blocks_tr): MSABlock( (norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=128, out_features=384, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=128, out_features=128, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): Identity() (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) ) ) (1): Block( (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) (norm): LayerNorm() (pwconv1): Linear(in_features=128, out_features=512, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=512, out_features=128, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=128, out_features=384, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=128, out_features=128, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) ) ) (2): Block( (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) (norm): LayerNorm() (pwconv1): Linear(in_features=128, out_features=512, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=512, out_features=128, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=128, out_features=384, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=128, out_features=128, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128) ) ) ) (Upsample): Patch_Expanding( (norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True) (up): ConvTranspose2d(256, 128, kernel_size=(2, 2), stride=(2, 2)) ) ) (1): BasicLayer_up( (blocks): ModuleList( (0): Block( (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) (norm): LayerNorm() (pwconv1): Linear(in_features=256, out_features=1024, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=1024, out_features=256, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=256, out_features=768, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): 
Linear(in_features=256, out_features=256, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) ) ) (1): Block( (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) (norm): LayerNorm() (pwconv1): Linear(in_features=256, out_features=1024, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=1024, out_features=256, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=256, out_features=768, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=256, out_features=256, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) ) ) (2): Block( (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) (norm): LayerNorm() (pwconv1): Linear(in_features=256, out_features=1024, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=1024, out_features=256, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=256, out_features=768, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=256, out_features=256, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256) ) ) ) (Upsample): Patch_Expanding( (norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (up): ConvTranspose2d(512, 256, kernel_size=(2, 2), stride=(2, 2)) ) ) (2): BasicLayer_up( (blocks): ModuleList( (0): Block( (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) (norm): LayerNorm() (pwconv1): Linear(in_features=512, out_features=2048, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=2048, out_features=512, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=512, out_features=1536, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=512, out_features=512, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) ) ) (1): Block( (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) (norm): LayerNorm() (pwconv1): Linear(in_features=512, out_features=2048, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=2048, out_features=512, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=512, out_features=1536, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=512, out_features=512, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) ) ) (2): Block( (dwconv): Conv2d(512, 
512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) (norm): LayerNorm() (pwconv1): Linear(in_features=512, out_features=2048, bias=True) (act): GELU(approximate=none) (pwconv2): Linear(in_features=2048, out_features=512, bias=True) (drop_path): DropPath() (blocks_tr): MSABlock( (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (attn): WindowAttention( (qkv): Linear(in_features=512, out_features=1536, bias=True) (attn_drop): Dropout(p=0, inplace=False) (proj): Linear(in_features=512, out_features=512, bias=True) (proj_drop): Dropout(p=0, inplace=False) (softmax): Softmax(dim=-1) ) (drop_path): DropPath() (dwconv): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512) ) ) ) (Upsample): Patch_Expanding( (norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (up): ConvTranspose2d(1024, 512, kernel_size=(2, 2), stride=(2, 2)) ) ) ) ) (final): ModuleList( (0): final_patch_expanding( (project_block): ModuleList() (up_final): ConvTranspose2d(128, 9, kernel_size=(4, 4), stride=(4, 4)) ) (1): final_patch_expanding( (project_block): ModuleList() (up_final): ConvTranspose2d(256, 9, kernel_size=(4, 4), stride=(4, 4)) ) (2): final_patch_expanding( (project_block): ModuleList() (up_final): ConvTranspose2d(512, 9, kernel_size=(4, 4), stride=(4, 4)) ) ) ) 2022-12-03 23:02:36.435290:
2022-12-03 23:02:36.435495: epoch: 0
Epoch 1/300: : 139it [00:50, 2.78it/s, loss=0.0466]
2022-12-03 23:03:26.458423: train loss : 0.5275
84145it [3:09:48, 7.26it/s]
```

The log above is from training on Synapse_224; Synapse_330 and the EM dataset show exactly the same behavior. I hope you can help clarify what is going wrong. Thank you!