MrGiovanni / ModelsGenesis

[MICCAI 2019 Young Scientist Award] [MEDIA 2020 Best Paper Award] Models Genesis

Questions about the experimental details #17

Closed vic85821 closed 4 years ago

vic85821 commented 4 years ago

Hi Zongwei,

Thanks for sharing such great work. I have tried to reproduce the results shown in Tab. 2 of the paper, but I can't reach comparable scores. Could you release the experimental settings (preprocessing, loss functions, optimizer, ...) for those experiments (NCC, NCS, ECC, LCS, BMS)?

Thanks for the help.

Best, Yu-Cheng

MrGiovanni commented 4 years ago

Hi Yu-Cheng,

Thank you for your inquiry.

Our pre-trained models learn a generic visual representation that is effective across diseases, organs, datasets, and even modalities, at least on our 5 target tasks. As such, you can freely use them for any other 3D medical application.

In our work, 4 out of the 5 target applications are evaluated on publicly available datasets. You can easily find many online tutorials on how to download the data, process it, and train/test a model. We have adopted some of the most compelling/popular GitHub repositories in our paper.

Meanwhile, we are currently producing an easy-to-use Jupyter Notebook covering the entire process, from loading the raw data through testing the model. It will be released soon.

FYI, here is a snapshot of our hyper-parameters when fine-tuning a model.

import os
import shutil

import keras
from keras.callbacks import LambdaCallback, TensorBoard, ReduceLROnPlateau

optimizer = 'adam'
lr = 1e-3
patience = 15
verbose = 1
batch_size = 12
workers = 10
max_queue_size = workers * 2
nb_epoch = 10000

def classification_model_compile(model, config):
    if config.num_classes <= 2:
        model.compile(optimizer=config.optimizer, 
                      loss="binary_crossentropy", 
                      metrics=['accuracy','binary_crossentropy'],
                 )
    else:
        model.compile(optimizer=config.optimizer, 
                      loss="categorical_crossentropy", 
                      metrics=['categorical_accuracy','categorical_crossentropy'],
                 )
    return model

def segmentation_model_compile(model, config):
    # dice_coef_loss, mean_iou, and dice_coef are custom functions defined elsewhere in the repository
    model.compile(optimizer=config.optimizer, 
                  loss=dice_coef_loss, 
                  metrics=[mean_iou, 
                           dice_coef],
                 )
    return model

def model_setup(model, config, task=None):
    if task == 'segmentation':
        model = segmentation_model_compile(model, config)
    elif task == 'classification':
        model = classification_model_compile(model, config)
    else:
        raise ValueError("task must be 'segmentation' or 'classification'")

    if os.path.exists(os.path.join(config.model_path, config.exp_name+".txt")):
        os.remove(os.path.join(config.model_path, config.exp_name+".txt"))
    with open(os.path.join(config.model_path, config.exp_name+".txt"),'w') as fh:
        model.summary(print_fn=lambda x: fh.write(x + '\n'))

    shutil.rmtree(os.path.join(config.logs_path, config.exp_name), ignore_errors=True)
    if not os.path.exists(os.path.join(config.logs_path, config.exp_name)):
        os.makedirs(os.path.join(config.logs_path, config.exp_name))
    tbCallBack = TensorBoard(log_dir=os.path.join(config.logs_path, config.exp_name),
                             histogram_freq=0,
                             write_graph=True, 
                             write_images=True,
                            )
    tbCallBack.set_model(model)

    early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss', 
                                                   patience=config.patience, 
                                                   verbose=config.verbose,
                                                   mode='min',
                                                  )
    check_point = keras.callbacks.ModelCheckpoint(os.path.join(config.model_path, config.exp_name+".h5"),
                                                  monitor='val_loss', 
                                                  verbose=config.verbose, 
                                                  save_best_only=True, 
                                                  mode='min',
                                                 )
    lrate_scheduler = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=6,
                                        min_delta=0.0001, min_lr=1e-6, verbose=1)
    callbacks = [check_point, early_stopping, tbCallBack, lrate_scheduler]
    return model, callbacks
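For reference, the hyper-parameters listed above could be bundled into a simple config object like the sketch below. This is a hypothetical illustration (the repository's actual config class, `num_classes`, and the directory paths are assumptions, not part of the snippet above):

```python
from types import SimpleNamespace

# Hypothetical config bundling the hyper-parameters from the snippet above;
# num_classes, model_path, logs_path, and exp_name are illustrative assumptions.
config = SimpleNamespace(
    optimizer='adam',
    lr=1e-3,
    patience=15,
    verbose=1,
    batch_size=12,
    workers=10,
    max_queue_size=20,   # workers * 2
    nb_epoch=10000,
    num_classes=2,                 # assumption: binary target task
    model_path='./models',         # assumption: output directories
    logs_path='./logs',
    exp_name='genesis_finetune',
)

# With a compiled Keras model in hand, setup would then look like:
# model, callbacks = model_setup(model, config, task='classification')
```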

Best, Zongwei

vic85821 commented 4 years ago

Hi Zongwei,

Thanks for the reply!

Best, Yu-Cheng

hbiserinska commented 4 years ago

Hello Zongwei,

This is the first time I have encountered such well-structured and easy-to-read code on GitHub. This is amazing work! Thank you for sharing. I am currently working on a lung nodule detection task from CT scans with a relatively new dataset (LNDb). I would be happy to try your approach, but I see that it targets either classification or segmentation, and I am not sure whether it applies to my case. Is there a way I can adapt Models Genesis for my task, i.e., classification + localization of the nodules?

MrGiovanni commented 4 years ago

Hello @hbiserinska ,

So far we have not applied Models Genesis to lung nodule (or other disease) localization. It is a challenging task to localize such small lesions in an entire CT scan. My answers in https://github.com/MrGiovanni/ModelsGenesis/issues/9 might be helpful for you.

After disease localization, you can use the pre-trained Models Genesis on the small subvolumes for false-positive reduction and disease classification.
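To make that concrete, feeding candidate locations to a pre-trained model requires cropping fixed-size subvolumes around each candidate. A minimal sketch, assuming NumPy (the function name and the crop size are illustrative, not part of the released code):

```python
import numpy as np

def extract_subvolume(volume, center, size=(64, 64, 32)):
    """Crop a fixed-size subvolume around a candidate location,
    zero-padding where the crop extends past the scan boundary."""
    out = np.zeros(size, dtype=volume.dtype)
    src, dst = [], []
    for c, s, dim in zip(center, size, volume.shape):
        start = c - s // 2                    # desired crop start (may be negative)
        lo, hi = max(start, 0), min(start + s, dim)
        src.append(slice(lo, hi))             # valid region inside the scan
        dst.append(slice(lo - start, hi - start))  # where it lands in the crop
    out[tuple(dst)] = volume[tuple(src)]
    return out
```

Each cropped subvolume can then be passed through the fine-tuned classifier for false-positive reduction.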

Thank you, Zongwei