Open YiLiangNie opened 6 years ago
All you need to do is add `"pretrain": "/path/to/the/pretrained/model"` to the `system` section of the configuration file. The code will load the pretrained file before it starts training.
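As a minimal sketch, the configuration file would contain something like the following (only the `"pretrain"` key is from this thread; any other keys in your config stay as they are):

```json
{
    "system": {
        "pretrain": "/path/to/the/pretrained/model"
    }
}
```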
I just saw your other post.
The size mismatch happens in the convolution layers for `tl_heats` and `br_heats`. The sizes of the weights and biases in the pretrained model are `[80, 256, 1, 1]` and `[80]` respectively (the leading 80 is the number of COCO classes), which do not match your new dataset.
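To see concretely which parameters can be reused, here is a minimal sketch of the shape comparison. Plain dicts of shape tuples stand in for the tensors you would get from `torch.load`, and the parameter names are illustrative, not the exact names in the CornerNet checkpoint:

```python
# Minimal sketch of why fine-tuning hits a class-count mismatch.
# Real checkpoints come from torch.load(...); here dicts of shape
# tuples stand in for tensors, and the names are illustrative.

def transfer_params(new_model, old_model):
    """Copy every pretrained parameter whose shape matches the new model;
    leave the mismatched, class-dependent heads untouched."""
    copied, skipped = [], []
    for name, shape in new_model.items():
        if old_model.get(name) == shape:   # with real tensors: compare .shape
            new_model[name] = old_model[name]
            copied.append(name)
        else:
            skipped.append(name)
    return copied, skipped

# pretrained COCO checkpoint: 80-class corner heatmap heads
old_model = {
    "tl_heats.0.1.weight": (80, 256, 1, 1),
    "tl_heats.0.1.bias":   (80,),
    "pre.0.conv.weight":   (128, 3, 7, 7),
}
# freshly initialized checkpoint for a 20-class dataset
new_model = {
    "tl_heats.0.1.weight": (20, 256, 1, 1),
    "tl_heats.0.1.bias":   (20,),
    "pre.0.conv.weight":   (128, 3, 7, 7),
}

copied, skipped = transfer_params(new_model, old_model)
print("copied: ", copied)    # backbone weights transfer
print("skipped:", skipped)   # the 80-vs-20 class heads stay randomly initialized
```

Only the shape-matching backbone parameters carry over; the heatmap heads keep their random initialization and are learned from scratch on the new dataset.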
To prepare the model for fine-tuning on a different dataset, you can do the following:

1. In `train.py`, add `nnet.save_params(0)` after line 114. This forces the code to save a randomly initialized model with the correct sizes before training starts. After the model is saved, terminate the code with Ctrl + C.
2. Load both the newly saved model and the pretrained model in Python with `torch.load`. After loading, the parameters can be replaced by `newmodel['param_name'] = oldmodel['param_name']`. The parameter names you are looking for are:
3. Save the new model, remove the `nnet.save_params(0)` added in step 1, and add `"pretrain": "/path/to/the/pretrained/model"` to the `system` section of the configuration file.

Hi heilaw, I found this function in `/models/CornerNet.py`, and I want to know what `n = 5` means. Thank you.
```python
class model(kp):
    def __init__(self, db):
        n       = 5
        dims    = [256, 256, 384, 384, 384, 512]
        modules = [2, 2, 2, 2, 2, 4]
        out_dim = 80

        super(model, self).__init__(
            n, 2, dims, modules, out_dim,
            make_tl_layer=make_tl_layer,
            make_br_layer=make_br_layer,
            make_pool_layer=make_pool_layer,
            make_hg_layer=make_hg_layer,
            kp_layer=residual, cnv_dim=256
        )
```
`n` is the number of downsamplings in each hourglass module.
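As a rough illustration (not the actual CornerNet hourglass code), `n` being the number of downsamplings explains why `dims` and `modules` each have `n + 1` entries: one channel count per resolution level plus one for the bottleneck. The helper name below is hypothetical:

```python
def hourglass_levels(n, dims):
    # one downsampling per level; dims[i] is the channel count at depth i,
    # dims[n] is the channel count at the lowest resolution
    assert len(dims) == n + 1
    levels = [("down", dims[i], dims[i + 1]) for i in range(n)]
    levels.append(("bottom", dims[n], dims[n]))
    levels += [("up", dims[i + 1], dims[i]) for i in reversed(range(n))]
    return levels

levels = hourglass_levels(5, [256, 256, 384, 384, 384, 512])
print(len([l for l in levels if l[0] == "down"]))  # 5 downsamplings
```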
Hi heilaw, if I want to port the project to Python 2.7, what files need to be modified? thx!
How do you fine-tune on your own dataset starting from your trained COCO model? Could you provide the JSON file for the fine-tuning params and update the `load_pretrained_params()` function in `./nnet/py_factory.py`?