sunprinceS / MetaASR-CrossAccent

Meta-Learning for End-to-End ASR

Fine-tune config #3

Closed sunprinceS closed 4 years ago

sunprinceS commented 4 years ago

During fine-tuning we generally don't need warmup at first, so which optimizer should we use (maybe Adam or AdamW?), and which learning rate (the same as the inner-loop learning rate?)?
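For concreteness, a minimal sketch of the options I have in mind, in plain PyTorch (the model and learning rates here are placeholders, not the actual config):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the ASR model being fine-tuned.
model = nn.Linear(80, 32)

inner_lr = 1e-3  # placeholder for the meta-training inner-loop learning rate

# Option 1: reuse the inner-loop setting (plain SGD, same lr, no warmup).
sgd = torch.optim.SGD(model.parameters(), lr=inner_lr)

# Option 2: switch to Adam / AdamW with a small constant lr, still no warmup.
adam = torch.optim.Adam(model.parameters(), lr=1e-4)
adamw = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
```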

sunprinceS commented 4 years ago

Some facts on the cross-region setting:

multi

eval_wer: [plot: multi-eval_wer-optimizer]
best_step: [plot: multi-optimizer-step]

fomaml (with meta batch size 5)

eval_wer: [plot: fomaml-eval_wer-optimizer.png]
best_step: [plot: fomaml-optimizer-step]

Btw, SGD with a 1e-3 learning rate gave poor results; I've already deleted those experiments.