ljwztc / CLIP-Driven-Universal-Model

[ICCV 2023] CLIP-Driven Universal Model; Rank first in MSD Competition.

Questions about BTCV performance of the Table 3 in the paper #26

Closed JiaxinZhuang closed 1 year ago

JiaxinZhuang commented 1 year ago

Thanks for the great work. Is there a specific script to reproduce the results of the universal model in Table 3?

ljwztc commented 1 year ago

Hi, Table 3 uses 5-fold cross-validation. You need to run training and validation 5 times and average the results. The dataset split is shown in https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/BTCV_folds.json.
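The averaging step above can be sketched as follows. This is a minimal illustration, not the repository's evaluation script; the fold names and Dice values are invented placeholders, not numbers from the paper.

```python
import statistics

# Hypothetical per-fold mean Dice scores from 5 separate
# train/validation runs (illustrative values only).
fold_dice = {
    "fold_0": 0.861,
    "fold_1": 0.874,
    "fold_2": 0.858,
    "fold_3": 0.869,
    "fold_4": 0.863,
}

# The reported number is the average over the 5 validation folds.
mean_dice = statistics.mean(fold_dice.values())
print(f"5-fold mean Dice: {mean_dice:.4f}")
```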

boxiangyun commented 1 year ago

How can I evaluate the results on the test set (img0061~img0080)? I couldn't find the annotations. Is it verified online on the official website?

JiaxinZhuang commented 1 year ago

> hi, the table 3 is conducted using 5-cross validation. you need to do 5 times training and validation, and average the results. the dataset split is shown in https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/BTCV_folds.json.

Thanks for the reply. In my understanding, the universal model is first trained on the assembled datasets, where the data split follows https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/dataset_list/PAOT_123457891213_train.txt, right? So, to conduct 5-fold cross-validation given BTCV_folds.json, should there be five PAOT_123457891213_train.txt files? I ask because these two files share some data for every validation fold.

ljwztc commented 1 year ago

> How to evaluate the results of the test set (img0061~0080)? I couldn't find the annotations. Is it verified online on the official website?

Yes. It should be verified online on the official website.

ljwztc commented 1 year ago

> hi, the table 3 is conducted using 5-cross validation. you need to do 5 times training and validation, and average the results. the dataset split is shown in https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/BTCV_folds.json.

> Thanks for reply. In my understanding, the universal model is firstly trained with the assembly datasets, where the data split follows https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/dataset_list/PAOT_123457891213_train.txt , right? So in order to conduct 5-cross validation and given the BTCV_folds.json, there should be five PAOT_123457891213_train.txt files? Since I find these two files share some data for every validation fold.

Yes. When conducting the BTCV experiment, the pre-training process should exclude the data from BTCV.
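A per-fold training list with the held-out BTCV cases removed could be built roughly like this. The list entries and case IDs below are made-up examples, and the filtering-by-substring approach is an assumption about how the list files are organized, not the repository's actual preprocessing code.

```python
# Minimal sketch: filter the assembled training list so that the
# current fold's BTCV validation cases are excluded before training.
# Paths and case IDs here are illustrative placeholders.

val_cases = {"img0001", "img0005"}  # cases held out for this fold

assembly_list = [
    "01_Multi-Atlas_Labeling/img/img0001.nii.gz",
    "01_Multi-Atlas_Labeling/img/img0002.nii.gz",
    "02_TCIA_Pancreas-CT/img/PANCREAS_0001.nii.gz",
]

def keep(line: str, held_out: set) -> bool:
    # Drop any entry whose case ID appears in the held-out set.
    return not any(case in line for case in held_out)

train_list = [ln for ln in assembly_list if keep(ln, val_cases)]
print(train_list)
```

Repeating this once per fold (with each fold's validation cases as `val_cases`) would yield the five per-fold training lists discussed above.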

boxiangyun commented 1 year ago

But in PAOT.txt, the list includes 01_Multi-Atlas_Labeling/label/label0061.nii.gz ~ label0080.nii.gz. Is this an error, or did you use data from 61 to 80 during the training phase?

ljwztc commented 1 year ago

The released codebase is for the MSD leaderboard main experiment. When training the model for BTCV, the BTCV data should be excluded.