Open bertsky opened 4 years ago
Thank you for your kind words!
And sorry for just seeing your issue. Sure, let me write a tutorial or build some exemplar code for the pre-training. Basically you just need to load any pre-trained weights we've provided, and repeat the training process.
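At a sketch level, continuing training could look like the following (assuming the models follow the standard Detectron2 workflow; the config path, checkpoint file, and dataset name are placeholders, not the repo's actual files, and a COCO-format dataset is assumed to be registered already):

```python
# Hedged sketch of resuming training from a pre-trained checkpoint with
# Detectron2. All file and dataset names below are placeholders.
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file("configs/my_publaynet_config.yaml")  # placeholder path
cfg.MODEL.WEIGHTS = "model_final.pth"     # the provided pre-trained weights
cfg.DATASETS.TRAIN = ("my_dataset_train",)  # your registered custom dataset
cfg.DATASETS.TEST = ()
cfg.SOLVER.MAX_ITER = 1000                # short fine-tuning run

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)      # initializes from cfg.MODEL.WEIGHTS
trainer.train()
```

With `resume=False`, `DefaultTrainer` initializes the model from `cfg.MODEL.WEIGHTS` rather than looking for an interrupted run, which is the behavior you want when fine-tuning from a released checkpoint.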
Oh, that would be great – thanks in advance!
I expect just initializing with your pre-trained models and training on new data would quickly make the model forget your large and broad initial dataset, because initial gradients will be large. Apart from layer fixation I also thought about reducing learning rate or imposing restrictive gradient clipping. But I guess I'll have to go through these experiments anyway...
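All three of those knobs (lower learning rate, gradient clipping, layer fixation) are exposed as fields in Detectron2's default config; a sketch of the overrides, with illustrative rather than tuned values, might be:

```python
# Hedged sketch of options for stabilizing continued training, expressed as
# Detectron2 config overrides. Field names come from detectron2's default
# config; the concrete values are illustrative only.
from detectron2.config import get_cfg

cfg = get_cfg()
# 1. Reduce the learning rate so early gradients don't wipe out the
#    pre-trained weights.
cfg.SOLVER.BASE_LR = 0.00025
# 2. Restrictive gradient clipping.
cfg.SOLVER.CLIP_GRADIENTS.ENABLED = True
cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = "value"
cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 1.0
# 3. Layer fixation: freeze the first backbone stages
#    (FREEZE_AT = 0 trains everything).
cfg.MODEL.BACKBONE.FREEZE_AT = 2
```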
Thanks for sharing your work, it's awesome!
I am eager to train this on my own materials, but they are comparatively scarce and I don't have the computational capacities to train on the whole PubLayNet from scratch myself.
So I was wondering: What changes would be needed to continue training from your pre-trained models? Or even more elaborately, do you think it would be worthwhile trying to load an existing model, fix most of the weights, and add some additional layers to the FPN for fine-tuning?
Same question here! And congratulations on this amazing tool.
Would be great to have such a script/documentation! Thanks a lot!
Any updates on your experiment @bertsky? I intend to do something similar and would like to learn from your experience.
@lolipopshock I am trying to fine tune this model on my own custom data, having different classes than what the model was trained on.
Here is what I have done: I changed `ROI_HEADS.NUM_CLASSES` in the config from 5 to 3. This led to the following warning:
Skip loading parameter 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (6, 1024) in the checkpoint but (4, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (6,) in the checkpoint but (4,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (20, 1024) in the checkpoint but (12, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (20,) in the checkpoint but (12,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.mask_head.predictor.weight' to the model due to incompatible shapes: (5, 256, 1, 1) in the checkpoint but (3, 256, 1, 1) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.mask_head.predictor.bias' to the model due to incompatible shapes: (5,) in the checkpoint but (3,) in the model! You might want to double check if this is expected.
Some model parameters or buffers are not found in the checkpoint: roi_heads.box_predictor.bbox_pred.{bias, weight}
Now here is what I expect has happened: the checkpoint weights for the `ROI_HEADS` predictor layers with incompatible shapes aren't loaded, and hence these weights will be randomly initialized. Please correct me if I am wrong :)
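For reference, the shapes in that warning follow directly from how a Detectron2-style Mask R-CNN head sizes its final layers from the class count. A plain-Python sketch of the arithmetic (the function and key names here are for illustration only, not actual library code):

```python
# Sketch of how the class-dependent predictor layer shapes are derived
# from the number of classes in a Detectron2-style Mask R-CNN head.

def head_shapes(num_classes, feat_dim=1024, mask_channels=256):
    """Return the output shapes of the class-dependent predictor layers."""
    return {
        # classification scores: one row per class, plus one for background
        "cls_score.weight": (num_classes + 1, feat_dim),
        "cls_score.bias": (num_classes + 1,),
        # box regression: 4 coordinates per class (class-specific boxes)
        "bbox_pred.weight": (4 * num_classes, feat_dim),
        "bbox_pred.bias": (4 * num_classes,),
        # mask head: one output channel per class
        "mask_predictor.weight": (num_classes, mask_channels, 1, 1),
        "mask_predictor.bias": (num_classes,),
    }

checkpoint = head_shapes(5)  # PubLayNet model: 5 classes -> (6, 1024), (20, 1024), ...
model = head_shapes(3)       # custom model: 3 classes -> (4, 1024), (12, 1024), ...

# Parameters whose shapes differ are skipped and left randomly initialized,
# which is exactly what the warning reports.
skipped = [k for k in model if model[k] != checkpoint[k]]
```

So with 5 → 3 classes every class-dependent layer changes shape, all six are skipped, and only the class-independent backbone and RPN weights are actually transferred from the checkpoint.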
It would be really great to have a short tutorial on fine tuning on a custom data-set with custom labels starting from a pretrained model.
If I'm successful in fine-tuning on a custom dataset, I will definitely work towards making a tutorial of the same.
The plan for updating the repo and creating a dedicated fine-tuning tutorial has been unintentionally delayed - will get back to this project very soon in one or two weeks and release the updates. Please stay tuned :)
Hi! Any updates about a fine-tuning tutorial? I'm looking forward to it!
We recently updated a bunch of stuff to make the repo more flexible. I'll work on creating a tutorial as and when I'm free, usually over the weekend.
Hi all, Here's a draft of the tutorial to fine tune models using this repo.
I will close this issue when it is published and update a link to the published version of the tutorial.
It seems the post is still not publicly available?
For now, to access the draft you'll need to be logged in to your Medium account. Once it's published (hopefully within 3 days at most), you'll be able to access it publicly without logging in.
Thanks - just took a quick look and it looks nice! Would you mind if I also included it on the layout-parser website as a tutorial for model training in the future? We can talk about the details if you join the Slack channel - thanks!
Would love that, I'll join the channel right away.
The tutorial is now live on Towards Data Science.
See #10