MStarmans91 / WORC

Workflow for Optimal Radiomics Classification

Evaluate two or more test sets individually #80

Open lyhyl opened 1 year ago

lyhyl commented 1 year ago

Is it possible to evaluate two or more test sets individually? Train on A, test on B, C, D, ... and output the corresponding performances (perf-B, perf-C, perf-D, ...)? I have tried to set images_test and segmentations_test as follows:

experiment.images_train.append(A_img_train)
experiment.segmentations_train.append(A_seg_train)

experiment.images_test.append(A_img_test)
experiment.segmentations_test.append(A_seg_test)
experiment.images_test.append(B_img)
experiment.segmentations_test.append(B_seg)

...

experiment.add_evaluation()

experiment.set_multicore_execution()
experiment.execute()

But only one performance/evaluation is output. In addition, I have checked estimator_all_0.hdf5, and it seems that only A_img_test and A_seg_test are used in the testing phase.

MStarmans91 commented 1 year ago

Currently, WORC supports specifying only one test set. Appending multiple objects to the images_test object is intended for supplying multiple images per sample / patient, e.g. a T1-weighted MRI and a T2-weighted MRI.
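
For illustration, a minimal sketch of that intended multi-image usage, with hypothetical variable names (each appended object holds one sequence for all patients):

experiment.images_train.append(T1_images_train)  # first sequence, all training patients
experiment.images_train.append(T2_images_train)  # second sequence, all training patients
experiment.images_test.append(T1_images_test)
experiment.images_test.append(T2_images_test)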

I am working on an Inference workflow, where you can provide a trained model and test it on another dataset, which would also support your use case. For now, you will have to run multiple experiments, one for each test set; see the sketch below. A separate model is trained per experiment, which costs you some extra time, but the evaluation on each test set will be similar to what you would get if this were all done in one experiment.
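
Until that Inference workflow is available, here is a minimal sketch of the workaround, assuming the BasicWORC facade from the WORC tutorial (the import path, constructor call, and label/output handling are assumptions; adjust them to your setup; the remaining calls match your snippet):

from WORC import BasicWORC  # assumed import path for the facade

# One experiment per external test set: train on A each time,
# then evaluate on B, C, D, ... individually.
test_sets = {'B': (B_img, B_seg), 'C': (C_img, C_seg)}  # extend with D, ...

for name, (img_test, seg_test) in test_sets.items():
    experiment = BasicWORC(f'train_A_test_{name}')  # assumed constructor signature

    experiment.images_train.append(A_img_train)
    experiment.segmentations_train.append(A_seg_train)

    experiment.images_test.append(img_test)
    experiment.segmentations_test.append(seg_test)

    experiment.add_evaluation()
    experiment.set_multicore_execution()
    experiment.execute()  # each run writes its own performance (perf-B, perf-C, ...)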