BPYap / Cut-Paste-Consistency

[WACV 2023] Cut-Paste Consistency Learning for Semi-Supervised Lesion Segmentation
https://arxiv.org/abs/2210.00191

(real_images, real_masks), (synth_images, synth_masks, backgrounds) = batch ValueError: not enough values to unpack (expected 2, got 1) #2

solayman-cs opened this issue 2 years ago

solayman-cs commented 2 years ago

When training started, I got this error. Could you please help me fix it? Thanks.

BPYap commented 2 years ago

Hi, can you share how you were calling the script? The two examples provided in the README should work; they have been tested on both Linux and Windows machines.

solayman-cs commented 2 years ago

I ran the following command on Windows (Anaconda):

python main.py ich-semi unet-cutmix --unlabeled_weight 0.01 --mean_teacher --base_ema 0.996 --seed 42 --num_workers 0 --batch_size 2 --labeled_split 0.7 --data_dir "data/CT-ICH/data/fold-1" --default_root_dir "model/ich" --gpus [0] --max_epochs 50 --check_val_every_n_epoch -1 --early_stopping_patience -1 --log_every_n_steps 1 --learning_rate 3e-5 --warmup_epochs 10 --optimizer adamw --weight_decay 1e-5 --lr_scheduler cosine --num_layers 5 --features_start 64 --input_channels 1 --preprocess resize --size 512 --inference_mode resize --do_train --do_test --disable_aupr --num_sanity_val_steps 0 --pos_weight 7.08

solayman-cs commented 2 years ago

Error details:

File "C:\Users\Solayman\Downloads\Semi-Supervised\Cut-Paste-Consistency\cpc\model\unet_cp.py", line 40, in training_step
    (real_images, real_masks), (synth_images, synth_masks, backgrounds) = batch
ValueError: not enough values to unpack (expected 2, got 1)

BPYap commented 2 years ago

The batch size (--batch_size 2) is too small for the default batch split of 0.4, causing the unlabeled batch to be empty. You can either change the batch split to 0.5 (i.e., add --batch_split 0.5 to the command) or increase the batch size to overcome this error.
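To see why this happens, here is a minimal sketch of the arithmetic; the exact formula and rounding used by the project's data loader are assumptions here, but the idea is that the unlabeled portion of a batch of 2 rounds down to zero:

def split_sizes(batch_size, batch_split):
    # batch_split: fraction of each batch drawn from the synthetic/unlabeled
    # pool (floor rounding here is an assumption, not the project's exact code)
    unlabeled = int(batch_size * batch_split)
    labeled = batch_size - unlabeled
    return labeled, unlabeled

print(split_sizes(2, 0.4))  # (2, 0) -> empty unlabeled sub-batch; the unpacking in training_step fails
print(split_sizes(2, 0.5))  # (1, 1) -> both sub-batches non-empty
print(split_sizes(8, 0.4))  # (5, 3) -> a larger batch size also avoids the error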

solayman-cs commented 2 years ago

Hello, thank you so much for your help. My issue has been solved. I have another query, if you don't mind. When I run the code with "cutmix" (ICH data), I get only f1: 0.18 and jaccard: 0.10 at test time. Is there a specific reason why I wasn't able to reproduce the article's results?

Details: ich-semi, unet-cutmix, fold 1, labeled split 0.7

BPYap commented 2 years ago

For reproducing the CutMix results on the ICH dataset, the most important hyperparameters are the batch size and the unlabeled weight. Here is the complete command to reproduce the experiment:

python main.py \
    ich-semi \
    unet-cutmix \
    --unlabeled_weight 0.1 \
    --mean_teacher \
    --base_ema 0.996 \
    --seed 42 \
    --num_workers 5 \
    --batch_size 8 \
    --labeled_split 0.7 \
    --batch_split 0.4 \
    --data_dir "data/CT-ICH/data/fold-1" \
    --default_root_dir "model/ich" \
    --gpus [0] \
    --max_epochs 50 \
    --check_val_every_n_epoch -1 \
    --early_stopping_patience -1 \
    --log_every_n_steps 10 \
    --learning_rate 3e-5 \
    --warmup_epochs 10 \
    --optimizer adamw \
    --weight_decay 1e-5 \
    --lr_scheduler cosine \
    --num_layers 5 \
    --features_start 64 \
    --input_channels 1 \
    --preprocess resize \
    --size 512 \
    --inference_mode resize \
    --inference_size 512 \
    --do_train \
    --do_test \
    --disable_aupr \
    --num_sanity_val_steps 0 \
    --pos_weight 7.08

Other factors that might affect reproduction include the versions of PyTorch and PyTorch Lightning; for this project, the recommended versions are 1.9.0 and 1.4.2, respectively. Hope this helps.
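For reference, pinning those versions with pip might look like this (standard PyPI package names; defer to the repository's requirements file if one is provided):

pip install torch==1.9.0 pytorch-lightning==1.4.2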