yassouali / CCT

Semi-Supervised Semantic Segmentation with Cross-Consistency Training (CVPR 2020).
https://yassouali.github.io/cct_page/
MIT License

low performance for fully supervised setting with 2 GPUs #2

Closed · jianlong-yuan closed this issue 4 years ago

jianlong-yuan commented 4 years ago

With 2 V100s (16 GB each), I get low performance. I am using SyncBN. [screenshot]
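(For reference, SyncBN here means converting the BatchNorm layers so that batch statistics are synchronized across the 2 GPUs. A minimal sketch using standard PyTorch APIs, not the repo's exact training script; assumes torch.distributed is already initialized:)

    import torch
    from torch.nn.parallel import DistributedDataParallel as DDP

    def to_sync_bn(model: torch.nn.Module, local_rank: int) -> torch.nn.Module:
        # Replace every nn.BatchNorm* layer with nn.SyncBatchNorm so batch
        # statistics are computed across all GPUs instead of per device.
        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
        model = model.cuda(local_rank)
        # SyncBatchNorm only works under DistributedDataParallel
        # (one process per GPU).
        return DDP(model, device_ids=[local_rank])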

jianlong-yuan commented 4 years ago

"name": "CCT", "experim_name": "CCT", "n_gpu": 2, "n_labeled_examples": 1464, "diff_lrs": true, "ramp_up": 0.1, "unsupervised_w": 30, "ignore_index": 255, "lr_scheduler": "Poly", "use_weak_lables":false, "weakly_loss_w": 0.4, "pretrained": true,

yassouali commented 4 years ago

Hi, thank you for your interest in our work,

Did you try it without sync batch norm?

jianlong-yuan commented 4 years ago

Yes, I have tried. I got 0.6579999923706055.

yassouali commented 4 years ago

Ok, thanks,

I did not really test it on multiple GPUs, let me run some tests and get back to you.

Just to be sure, you are training using 1.5K labeled examples in semi mode? @jianlong-yuan

jianlong-yuan commented 4 years ago

No, I just modified the config, "supervised": false ==> true:

{
"name": "CCT",
"experim_name": "CCT",
"n_gpu": 2,
"n_labeled_examples": 1464,
"diff_lrs": true,
"ramp_up": 0.1,
"unsupervised_w": 30,
"ignore_index": 255,
"lr_scheduler": "Poly",
"use_weak_lables": false,
"weakly_loss_w": 0.4,
"pretrained": true,

"model":{
    "supervised": true,
    "semi": false,
    "supervised_w": 1,

    "sup_loss": "CE",
    "un_loss": "MSE",

    "softmax_temp": 1,
    "aux_constraint": false,
    "aux_constraint_w": 1,
    "confidence_masking": false,
    "confidence_th": 0.5,

    "drop": 6,
    "drop_rate": 0.5,
    "spatial": true,

    "cutout": 6,
    "erase": 0.4,

    "vat": 2,
    "xi": 1e-6,
    "eps": 2.0,

    "context_masking": 2,
    "object_masking": 2,
    "feature_drop": 6,

    "feature_noise": 6,
    "uniform_range": 0.3
},

"optimizer": {
    "type": "SGD",
    "args":{
        "lr": 1e-2,
        "weight_decay": 1e-4,
        "momentum": 0.9
    }
},

"train_supervised": {
    "data_dir": "/home/data/segmentation/pascal_voc/",
    "batch_size": 10,
    "crop_size": 320,
    "shuffle": true,
    "base_size": 400,
    "scale": true,
    "augment": true,
    "flip": true,
    "rotate": false,
    "blur": false,
    "split": "train_supervised",
    "num_workers": 8
},

"train_unsupervised": {
    "data_dir": "/home/gongyuan.yjl/data/segmentation/pascal_voc/",
    "weak_labels_output": "pseudo_labels/result/pseudo_labels",
    "batch_size": 10,
    "crop_size": 320,
    "shuffle": true,
    "base_size": 400,
    "scale": true,
    "augment": true,
    "flip": true,
    "rotate": false,
    "blur": false,
    "split": "train_unsupervised",
    "num_workers": 8
},

"val_loader": {
    "data_dir": "/home/data/segmentation/pascal_voc/",
    "batch_size": 1,
    "val": true,
    "split": "val",
    "shuffle": false,
    "num_workers": 4
},

"trainer": {
    "epochs": 80,
    "save_dir": "saved/",
    "save_period": 5,

    "monitor": "max Mean_IoU",
    "early_stop": 10,

    "tensorboardX": true,
    "log_dir": "saved/",
    "log_per_iter": 20,

    "val": true,
    "val_per_epochs": 5
}

}

yassouali commented 4 years ago

Hi, I just ran the semi mode on 2 GPUs with 1.5K labels using the provided config and got 68.9 (at epoch 60 of 80, before the end of training; it would go up if I continued). But I see that you want the supervised mode: are you training in supervised mode using only the 1.5K labels, or did you also add all the IDs to the .txt file?

If you are training on the 1.5K labeled examples only, the results look correct to me, but if you are using the full labeled set (10K), can you say for how many epochs you are training? The full 80 epochs?
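A quick sanity check, in case it helps (the split-file path below is just an assumed example, use wherever your train_supervised list actually lives):

    # Hypothetical check: the number of image ids in the supervised split
    # file should match "n_labeled_examples" (1464 for the 1.5K setting).
    split_file = "path/to/train_supervised.txt"  # assumed location
    with open(split_file) as f:
        n_ids = sum(1 for line in f if line.strip())
    print(n_ids)  # expect 1464; ~10K would mean the full labeled set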

jianlong-yuan commented 4 years ago

Got it. I used 1.5K labeled examples and the full 80 epochs; I got the best model at about epoch 60. [screenshot] I found this table in the paper: is <CCT 1.5k - 69.4> with 1.5K supervised and 9K for semi-supervised?

yassouali commented 4 years ago

Yes - CCT always refers to the semi-supervised mode; the supervised mode is only there as a baseline, to confirm that CCT performs better than purely supervised training.

So CCT - 1.5K refers to using 1.5K labeled examples (with the cross-entropy loss), while the remaining 9K are used in the unsupervised loss (L_u in the paper). To get the exact results, simply run the provided config file (note that one GPU works better due to batch norm). When you use the supervised mode, you only train with the cross-entropy loss, so the results will be 64-65, because you are not using the L_u loss, only normal training.
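As a rough sketch of what the two modes optimize (simplified pseudocode of the objective described above, not the actual trainer code; the weight and ramp-up correspond to "unsupervised_w": 30 and "ramp_up": 0.1 in the config):

    # Simplified sketch: CE on the labeled batch, plus (in semi mode) the
    # MSE consistency term L_u on the unlabeled batch, ramped up over the
    # first 10% of training.
    def total_loss(ce_labeled, mse_unlabeled, semi, w_u=30.0, ramp=1.0):
        loss = ce_labeled            # supervised mode stops here -> ~64-65
        if semi:
            loss = loss + w_u * ramp * mse_unlabeled   # semi mode -> 69.4
        return loss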

As for the 73.2 - see Section 3, where we use weak labels: 1.5K images with segmentation maps plus 9K images with image-level class labels. Hope things are clear now.

jianlong-yuan commented 4 years ago

Thank you. Got it.

chuxiang93 commented 4 years ago

Hi @yassouali, did you run the experiment with only 1.5K labeled examples for supervised training? I didn't see the corresponding results in this table. [screenshot] Thank you,

yassouali commented 4 years ago

@mjq93 Hi, thank you for your interest. Yes, you are right, the supervised case for 1.5K labels was not reported. I don't quite remember the exact value, but I think it was around 66.