WongKinYiu / yolov7

Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
GNU General Public License v3.0

What is transfer learning vs. reparameterization? When to use which? #231

Closed · akashAD98 closed this 2 years ago

akashAD98 commented 2 years ago

Transfer learning consists of taking features learned on one problem and leveraging them on a new, similar problem. For instance, features from a model that has learned to identify raccoons may be useful to kick-start a model meant to identify tanukis. Transfer learning is usually done for tasks where your dataset has too little data to train a full-scale model from scratch. The most common incarnation of transfer learning in the context of deep learning is the following workflow:

1. Take layers from a previously trained model.

2. Freeze them, so as to avoid destroying any of the information they contain during future training rounds.

3. Add some new, trainable layers on top of the frozen layers. They will learn to turn the old features into predictions on a new dataset.

4. Train the new layers on your dataset.

A last, optional step is fine-tuning, which consists of unfreezing the entire model you obtained above (or part of it) and re-training it on the new data with a very low learning rate. This can potentially achieve meaningful improvements by incrementally adapting the pretrained features to the new data.
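The workflow above maps directly onto a few lines of PyTorch. This is a minimal sketch, not yolov7-specific; `resnet18` and the 5-class head are hypothetical stand-ins for any pretrained backbone and new task:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# 1. Take layers from a previously trained model.
model = resnet18(weights=ResNet18_Weights.DEFAULT)

# 2. Freeze them so training cannot destroy the learned features.
for param in model.parameters():
    param.requires_grad = False

# 3. Add new, trainable layers on top (a fresh head for a hypothetical 5-class task).
model.fc = nn.Linear(model.fc.in_features, 5)  # new parameters require grad by default

# 4. Train only the new layers on your dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Optional fine-tuning afterwards: unfreeze everything and retrain
# with a very low learning rate.
for param in model.parameters():
    param.requires_grad = True
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```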

akashAD98 commented 2 years ago

@WongKinYiu Reparameterization is nothing but fine-tuning / training from scratch, am I correct?

Also, please give examples of when to use which, so people won't be confused.

WongKinYiu commented 2 years ago

Reparameterization is used to fuse trainable BoF modules into the deploy model for fast inference, for example merging BN into conv, merging YOLOR into conv, and so on. Transfer learning uses a model pretrained on a large dataset and fine-tunes it on a small dataset. They are different concepts.

Examples are in the README.
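To make the "merge BN to conv" step concrete, here is a minimal sketch of BatchNorm fusion (not the repo's official reparameterization script; see the README for that). In eval mode, BN is a per-channel affine transform, so it folds into the preceding convolution's weights and bias:

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Return one Conv2d whose output equals bn(conv(x)) at inference time."""
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels,
        kernel_size=conv.kernel_size, stride=conv.stride,
        padding=conv.padding, dilation=conv.dilation,
        groups=conv.groups, bias=True,
    )
    with torch.no_grad():
        # BN in eval mode computes y = gamma * (x - mean) / sqrt(var + eps) + beta,
        # which folds into the conv as a per-channel scale and bias.
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        conv_bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
        fused.bias.copy_((conv_bias - bn.running_mean) * scale + bn.bias)
    return fused

# Sanity check on hypothetical shapes: the fused conv matches conv+BN in eval mode.
conv, bn = nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16).eval()
x = torch.randn(1, 3, 32, 32)
assert torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5)
```

Note that this changes nothing about what the network computes; it only removes the separate BN operation so the deployed model runs faster, which is why it is unrelated to transfer learning.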

akashAD98 commented 2 years ago

@WongKinYiu Thanks, that's helpful.