YijinHuang / pytorch-classification

A general, feasible, and extensible framework for classification tasks.
MIT License

Evaluation of pretrained models in Lesion-based CL #10

Closed DerrickGuu closed 4 months ago

DerrickGuu commented 6 months ago

Hi, I'm really interested in your work; however, I came across some issues while using this repository to evaluate the pre-trained models from Lesion-based CL. The instructions for reproducing the experimental results on diabetic retinopathy grading say: "To fine-tune pretrained models, treat the pretrained weights as checkpoints by updating the item "checkpoint" in ~/configs/eyepacs.yaml." My understanding is that I just have to modify the "checkpoint" item in ~/configs/eyepacs.yaml like this:

checkpoint: /root/Work/Retina_Seg/pytorch-classification/configs/final_model.pt # load weights from other pretrained model

After training the Lesion-based CL model, saving the trained weights, and updating the "checkpoint" item in ~/configs/eyepacs.yaml, I ran main.py and got this error message:

AttributeError: Can't get attribute 'ContrastiveModel' on <module 'modules' (<_frozen_importlib_external.NamespaceLoader object at 0x7f85f1d342d0>)>

I also changed the pretrained model to the one you provided (i.e., resnet50_128_07.pt), and it showed: RuntimeError: Error(s) in loading state_dict for ResNet: Missing key(s) in state_dict: "fc.weight", "fc.bias".

How can I resolve these errors, or are there any other steps I have missed to evaluate the pre-trained models in Lesion-based CL? Any help is appreciated!

YijinHuang commented 4 months ago

Sorry for the late reply, and thank you for pointing out this issue. You are correct in your steps.

  1. The "AttributeError" is caused by the pickle mechanism, which means you need to copy the Python script containing the "ContrastiveModel" class to the same folder as "main.py".

  2. To resolve the "Missing key(s)" error, please set strict=False in the load_state_dict function.

If you encounter any other problems, please feel free to reopen this issue.