KatherLab / HIA

Histopathology Image Analysis

Missing ViT implementation #6

Closed butkej closed 1 year ago

butkej commented 1 year ago

As per your published MedIA journal paper "Benchmarking weakly-supervised deep learning pipelines for whole slide classification in computational pathology" I wanted to check out the different pipeline methods. However, Vision Transformers are not included in this repository as it seems. Will they be integrated in the future?

jieruyao49 commented 1 year ago

I have met the same problem. Have you solved it?

butkej commented 1 year ago

Yeah, I was stupid and didn't look correctly. A pretrained ViT is integrated in the Classical Workflow after all. See `Classic_Training.py` -> `utils.Initialize_model`:

```python
elif model_name == "vit":
    model_ft = ViT('B_32_imagenet1k', pretrained=True)
    Set_parameter_requires_grad(model_ft, feature_extract)
    num_ftrs = model_ft.fc.in_features
    model_ft.fc = nn.Linear(num_ftrs, num_classes)
    input_size = 384
```
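For anyone else reading along, the pattern in that branch is the standard freeze-then-swap-the-head fine-tuning recipe. Here is a minimal self-contained sketch of it; `TinyBackbone` is a hypothetical stand-in for the ViT (which likewise exposes its classifier as `.fc`), not code from this repository:

```python
import torch
import torch.nn as nn

def set_parameter_requires_grad(model, feature_extract):
    # When feature extracting, freeze every existing weight so that only
    # the classifier head attached afterwards receives gradients.
    if feature_extract:
        for param in model.parameters():
            param.requires_grad = False

class TinyBackbone(nn.Module):
    # Hypothetical toy model mirroring the attribute layout of the
    # pretrained ViT: an encoder plus a 1000-class ImageNet head `fc`.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(32, 16)
        self.fc = nn.Linear(16, 1000)

    def forward(self, x):
        return self.fc(self.encoder(x))

model = TinyBackbone()
set_parameter_requires_grad(model, feature_extract=True)

# Replace the head with a task-specific one, as Initialize_model does;
# the new nn.Linear has requires_grad=True by default.
num_ftrs = model.fc.in_features
num_classes = 2
model.fc = nn.Linear(num_ftrs, num_classes)

# Only the new head is trainable now.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

With `feature_extract=True` the optimizer effectively trains only `fc.weight` and `fc.bias`; passing `feature_extract=False` would fine-tune the whole backbone instead.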

jieruyao49 commented 1 year ago

Thank you for your reply. I have some other questions about how to run this code. When I run Main.py, I need to set `--adressExpress`. Where should I get "DACHS_MIL_TRAINFULL_Early stopFalse.txt"?