Hi, Table 3 is conducted using 5-fold cross-validation: you need to run training and validation five times and average the results. The dataset split is shown in https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/BTCV_folds.json.
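A minimal sketch of that protocol, under one assumption not verified against the repo: that BTCV_folds.json maps fold keys to lists of BTCV case identifiers/paths. The `run_training_and_validation` hook mentioned in the comment is a hypothetical stand-in for the repo's actual train/validation scripts.

```python
import json
import statistics

# Assumption (not verified against the repo): BTCV_folds.json maps fold keys
# to lists of BTCV case identifiers/paths.
with open("dataset/BTCV_folds.json") as f:
    folds = json.load(f)

fold_keys = sorted(folds)
dice_per_fold = []
for held_out in fold_keys:
    val_cases = folds[held_out]
    train_cases = [c for k in fold_keys if k != held_out for c in folds[k]]
    # Hypothetical hook: run the repo's training/validation with these two lists
    # and append the resulting mean Dice, e.g.
    #   dice_per_fold.append(run_training_and_validation(train_cases, val_cases))
    print(f"{held_out}: {len(train_cases)} train cases, {len(val_cases)} val cases")

# The Table 3 number is the average over the five validation folds.
if dice_per_fold:
    print("mean Dice over 5 folds:", statistics.mean(dice_per_fold))
```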
How can I evaluate the results on the test set (img0061~img0080)? I couldn't find the annotations. Are they verified online on the official website?
> Hi, Table 3 is conducted using 5-fold cross-validation: you need to run training and validation five times and average the results. The dataset split is shown in https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/BTCV_folds.json.
Thanks for the reply. In my understanding, the Universal Model is first trained on the assembled datasets, where the data split follows https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/dataset_list/PAOT_123457891213_train.txt, right? So, to conduct 5-fold cross-validation given BTCV_folds.json, should there be five PAOT_123457891213_train.txt files? I ask because I find these two files share some data in every validation fold.
> How can I evaluate the results on the test set (img0061~img0080)? I couldn't find the annotations. Are they verified online on the official website?
Yes. They should be verified online on the official website.
> Thanks for the reply. In my understanding, the Universal Model is first trained on the assembled datasets, where the data split follows https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/dataset_list/PAOT_123457891213_train.txt, right? So, to conduct 5-fold cross-validation given BTCV_folds.json, should there be five PAOT_123457891213_train.txt files? I ask because I find these two files share some data in every validation fold.
Yes. When conducting the BTCV experiment, the pre-training process should exclude the data from BTCV.
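A minimal sketch of building such a BTCV-free pre-training list, assuming each line of PAOT_123457891213_train.txt starts with the dataset folder name (so BTCV entries begin with 01_Multi-Atlas_Labeling); the output filename is hypothetical.

```python
SRC = "dataset/dataset_list/PAOT_123457891213_train.txt"
DST = "dataset/dataset_list/PAOT_123457891213_train_noBTCV.txt"  # hypothetical name

with open(SRC) as f:
    lines = f.readlines()

# Keep only non-BTCV entries so the pre-training stage never sees BTCV cases.
kept = [ln for ln in lines if not ln.startswith("01_Multi-Atlas_Labeling")]

with open(DST, "w") as f:
    f.writelines(kept)

print(f"kept {len(kept)}/{len(lines)} lines after removing BTCV entries")
```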
But in PAOT.txt, you list 01_Multi-Atlas_Labeling/label/label0061.nii.gz ~ label0080.nii.gz. Is this an error, or did you use the data from 0061 to 0080 during the training phase?
The released codebase is for the main MSD-leaderboard experiment. When training the model for BTCV, the BTCV data should be excluded.
Thanks for the great work. Is there any specific script to reproduce the results of the Universal Model in Table 3?