PatrickNa closed this issue 4 years ago
You load the pre-trained model and replace the last layer with one whose output size matches your number of classes. Something like this:
```python
import torch
import torch.nn as nn

# Load the pre-trained weights from a checkpoint
# (model and model_path are assumed to be defined elsewhere)
checkpoint = torch.load(model_path)
model.load_state_dict(checkpoint['state_dict'])

# Replace the last layer: in_channels (16 here) must match the
# preceding layer; out_channels becomes your number of classes
model.transposed_conv = nn.ConvTranspose2d(
    16,
    num_classes,
    kernel_size=3,
    stride=2,
    padding=1,
    bias=False)
```
Then you fine-tune this model on your own dataset.
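As a self-contained sketch of that workflow (the model here is a hypothetical tiny segmentation net standing in for the pre-trained one; the layer names, the in-channel count of 16, and the class counts are all assumptions, not this repo's actual architecture):

```python
import torch
import torch.nn as nn

# Hypothetical tiny segmentation net standing in for the real model
class TinySegNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.encoder = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
        self.transposed_conv = nn.ConvTranspose2d(
            16, num_classes, kernel_size=3, stride=2,
            padding=1, output_padding=1, bias=False)

    def forward(self, x):
        return self.transposed_conv(torch.relu(self.encoder(x)))

# Pretend we pre-trained on 19 classes (Cityscapes) and saved a checkpoint
pretrained = TinySegNet(num_classes=19)
checkpoint = {'state_dict': pretrained.state_dict()}

# Restore the pre-trained weights
model = TinySegNet(num_classes=19)
model.load_state_dict(checkpoint['state_dict'])

# Replace the head for the new label set (5 classes) and freeze the encoder
model.transposed_conv = nn.ConvTranspose2d(
    16, 5, kernel_size=3, stride=2, padding=1,
    output_padding=1, bias=False)
for p in model.encoder.parameters():
    p.requires_grad = False

# One fine-tuning step on dummy data
opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
x = torch.randn(2, 3, 64, 64)
y = torch.randint(0, 5, (2, 64, 64))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
opt.step()
print(model(x).shape)  # torch.Size([2, 5, 64, 64])
```

Freezing the encoder is optional; with a small dataset it often helps, and you can unfreeze it later for a second fine-tuning pass at a lower learning rate.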
Once again, thank you!
How would you apply transfer learning to a model, or continue training with another dataset that does not consist of all the classes in the original? E.g., you train the model on the Cityscapes dataset first and then want to continue with your own dataset that consists of a subset of the Cityscapes classes?
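For the subset-of-classes case, one common option (a sketch, assuming your labels reuse the original Cityscapes train IDs; the specific IDs kept below are illustrative) is to remap the kept IDs to a contiguous range and mark everything else with an ignore index:

```python
import torch

# Hypothetical example: keep only 3 of the 19 Cityscapes train IDs
kept_ids = [0, 1, 11]  # assumed: road, sidewalk, person
lut = torch.full((19,), 255, dtype=torch.long)  # 255 = ignore_index
for new_id, old_id in enumerate(kept_ids):
    lut[old_id] = new_id

# A toy 2x2 label map using the old IDs; 5 is not in kept_ids
label = torch.tensor([[0, 5], [11, 1]])
remapped = lut[label]
print(remapped)  # tensor([[  0, 255], [  2,   1]])
```

You would then train the new 3-class head with `nn.CrossEntropyLoss(ignore_index=255)` so pixels from dropped classes contribute no gradient.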