google / automl

Google Brain AutoML

Gradual training strategy #798

Open kartik4949 opened 4 years ago

kartik4949 commented 4 years ago

Gradual training strategy, i.e. training the network for a few epochs with all layers frozen, then resuming training with some of the last layers unfrozen, and finally training all layers for fine-tuning. Could this be helpful for our network on a custom dataset? @mingxingtan @fsx950223
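Roughly what I mean, sketched with a plain Keras classifier rather than the EfficientDet training loop (the layer counts, learning rates, epoch splits and `train_ds` dataset are just placeholders for illustration):

```python
import tensorflow as tf

def freeze_all_but_last(model, n_trainable):
    """Freeze every layer except the last `n_trainable` ones."""
    for layer in model.layers:
        layer.trainable = False
    for layer in model.layers[len(model.layers) - n_trainable:]:
        layer.trainable = True

backbone = tf.keras.applications.EfficientNetB0(include_top=False, pooling='avg')
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(20, activation='softmax')])

def compile_and_fit(lr, epochs):
    # Recompile after every trainable change so Keras picks it up.
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss='sparse_categorical_crossentropy')
    # model.fit(train_ds, epochs=epochs)  # train_ds is a placeholder dataset

# Phase 1: backbone fully frozen, train only the new head.
freeze_all_but_last(backbone, 0)
compile_and_fit(1e-3, 5)

# Phase 2: unfreeze the last few backbone layers.
freeze_all_but_last(backbone, 20)
compile_and_fit(1e-4, 5)

# Phase 3: unfreeze everything for final fine-tuning.
backbone.trainable = True
compile_and_fit(1e-5, 10)
```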

mingxingtan commented 4 years ago

Yeah, sounds like a good (and simple) optimization to speed up training, but I am not sure about the quality impact.

kartik4949 commented 4 years ago

@mingxingtan The quality impact should be slim to none, at least in theory. We can at least try it.

ChulanZhang commented 3 years ago

@kartik4949 I am a beginner at ML. May I ask the reasoning behind the order in your idea? My intuition about the training sequence is the opposite. Feel free to correct me if I am wrong.

Intuitively, I think that if I need to fine-tune the model on a custom dataset, I should first train the whole network without freezing any part of it, so the model picks up features of the new classes in the custom dataset. After reaching a decent number, I should freeze part of the network, for example the 'efficientnet' backbone, and then continue training to the end.
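Just to spell out the order I mean, here is a rough sketch with a generic Keras model (the dataset, head size and epoch counts are placeholders, not the EfficientDet pipeline):

```python
import tensorflow as tf

backbone = tf.keras.applications.EfficientNetB0(include_top=False, pooling='avg')
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(20, activation='softmax')])

# Step 1: train everything end to end so the new classes are learned.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='sparse_categorical_crossentropy')
# model.fit(train_ds, epochs=10)  # train_ds is a placeholder

# Step 2: freeze the backbone and keep refining only the head.
backbone.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='sparse_categorical_crossentropy')  # recompile after freezing
# model.fit(train_ds, epochs=5)
```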

I can see "var_freeze_expr: '(efficientnet|fpn_cells|resample_p6)'" in the tutorial. May I know all the available options for 'var_freeze_expr'?
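My guess from the tutorial is that var_freeze_expr is a regular expression matched against variable names, so the available options would be whatever name scopes the model actually has (printing `[v.name for v in model.trainable_variables]` should show them), but please correct me if that is wrong. A toy sketch of that kind of filter, not the repo's exact code:

```python
import re
import tensorflow as tf

def filter_trainable(variables, var_freeze_expr):
    """Drop every variable whose name matches the freeze regex.

    This mirrors the general idea of a var_freeze_expr-style option: a
    regex tested against variable names; matching variables are left out
    of the list handed to the optimizer, so they are never updated.
    """
    return [v for v in variables if not re.match(var_freeze_expr, v.name)]

# Toy model just to show the name matching; with EfficientDet the scopes
# would be the ones printed from model.trainable_variables.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, name='backbone_dense', input_shape=(4,)),
    tf.keras.layers.Dense(2, name='head_dense'),
])
train_vars = filter_trainable(model.trainable_variables, r'backbone')
print([v.name for v in train_vars])  # only the head_dense kernel/bias remain
```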