Hi,
To load the pre-trained model, build the same net as the pre-trained model:

crnn = net.CRNN(params.imgH, params.nc, nclass, params.nh)

Here, nclass should not equal len(params.alphabet) + 1; it should be the class count of the pre-trained model, since load_state_dict only succeeds when the layer shapes match the checkpoint. Then replace the last layer with your own:
crnn.rnn = nn.Sequential(
    BidirectionalLSTM(512, nh, nh),
    BidirectionalLSTM(nh, nh, nclass))

where now nclass = len(params.alphabet) + 1 for your new alphabet.
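(An aside, not from the original thread: if you are unsure of the pre-trained model's class count, you can read it off the checkpoint itself. This sketch assumes the layer naming of this repo's CRNN, where the final Linear layer is rnn.1.embedding, and that the checkpoint was saved without a DataParallel 'module.' prefix.)

import torch

state = torch.load('path/to/my/pre-trained', map_location='cpu')
# The final embedding layer's output dimension equals the pre-trained nclass.
nclass_pre = state['rnn.1.embedding.weight'].shape[0]
print('pre-trained nclass:', nclass_pre)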
Thanks for the quick response.
Now I should first load the pre-trained model by changing params.py:
pretrained = 'path/to/my/pre-trained'
But I did not understand the second point. What should I change?
Thanks
After loading the model, change the rnn layer.
Thank you Holmeyoung. Should I change crnn.py:
self.cnn = cnn
self.rnn = nn.Sequential(
    BidirectionalLSTM(512, nh, nh),
    BidirectionalLSTM(nh, nh, nclass))
to
crnn.rnn = nn.Sequential(
    BidirectionalLSTM(512, nh, nh),
    BidirectionalLSTM(nh, nh, nclass))
?
Hi, you should change train.py to:

import torch.nn as nn
from models.crnn import BidirectionalLSTM

def net_init():
    nclass_pre = 11  # nclass of your pre-trained model: len(pre-trained alphabet) + 1
    nclass = len(params.alphabet) + 1
    crnn = net.CRNN(params.imgH, params.nc, nclass_pre, params.nh)
    crnn.apply(weights_init)
    if params.pretrained != '':
        print('loading pretrained model from %s' % params.pretrained)
        if params.multi_gpu:
            crnn = torch.nn.DataParallel(crnn)
        crnn.load_state_dict(torch.load(params.pretrained))
        # Swap in a new head sized for the new alphabet. When wrapped in
        # DataParallel, the swap must happen on the inner module, or the
        # old head would still be used in forward().
        model = crnn.module if params.multi_gpu else crnn
        model.rnn = nn.Sequential(
            BidirectionalLSTM(512, params.nh, params.nh),
            BidirectionalLSTM(params.nh, params.nh, nclass))
    return crnn
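As a quick sanity check (an editorial sketch, not from the thread), you can confirm the head was actually replaced by inspecting the final layer's output size; this assumes the BidirectionalLSTM layout in this repo, where the final Linear is stored as .embedding:

crnn = net_init()
model = crnn.module if params.multi_gpu else crnn
# Should print len(params.alphabet) + 1 after the swap.
print(model.rnn[1].embedding.out_features)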
Thank you very much.
Hi Holmeyoung,
When I retrain a pre-trained model, it seems like the model forgets what it has learned. I mean, if I train a model on synthetic images and then fine-tune it with real-world images, the model's accuracy on the synthetic images decreases. Please correct me if I am wrong: to use a pre-trained model, I should freeze the last layers. If so, how can I freeze the last layers?
Thanks
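For reference (an editorial sketch, not an answer from the original thread): in PyTorch, layers are frozen by setting requires_grad = False on their parameters. Note that for this kind of fine-tuning the usual recipe is the reverse of freezing the last layers: freeze the pre-trained CNN backbone and train the freshly created rnn head, since the head was just re-initialized for the new alphabet. A minimal sketch, assuming the crnn model from this thread and that params defines a learning rate lr:

import torch

# Freeze the convolutional backbone so only the rnn head is updated.
for p in crnn.cnn.parameters():
    p.requires_grad = False

# Hand the optimizer only the parameters that still require gradients.
# Adam is used here only as an example optimizer.
optimizer = torch.optim.Adam(
    (p for p in crnn.parameters() if p.requires_grad),
    lr=params.lr)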