Open TheBrownViking20 opened 5 years ago
I would be curious whether LibriSpeech-trained Jasper would work if you simply build an n-gram language model from your financial text data and decode with that LM.
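To illustrate the idea behind this suggestion: a domain LM assigns higher probability to in-domain word sequences, and the beam-search decoder uses that score to rerank acoustic hypotheses. Below is a minimal pure-Python sketch of a smoothed bigram LM (this is illustrative only, not part of OpenSeq2Seq; the function names are mine, and it assumes your financial text is plain whitespace-tokenized sentences):

```python
from collections import Counter
import math

def train_bigram_lm(corpus_sentences):
    """Count unigrams and bigrams from domain text (e.g. financial transcripts)."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus_sentences:
        tokens = ["<s>"] + sent.lower().split() + ["</s>"]
        unigrams.update(tokens[:-1])          # history counts for each bigram
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def log_prob(sentence, unigrams, bigrams, vocab_size, alpha=1.0):
    """Add-alpha smoothed bigram log-probability; higher means more in-domain."""
    tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
    lp = 0.0
    for w1, w2 in zip(tokens, tokens[1:]):
        lp += math.log((bigrams[(w1, w2)] + alpha) /
                       (unigrams[w1] + alpha * vocab_size))
    return lp
```

In practice you would build a real ARPA-format model with a toolkit such as KenLM from your financial corpus and hand that to the beam-search decoder, but the scoring principle is the same: hypotheses whose word sequences were seen in the domain text get a higher LM score.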
@borisgin are you sure the learning rate is re-initialized with continue_learning?
Also, does anyone know whether the finetune parameter in here works?
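For what it's worth, my understanding (unverified; parameter names may differ between OpenSeq2Seq versions, so treat this as an assumption to check against the docs) is that load_model in base_params loads pretrained weights without restoring the global step or optimizer state, so the learning-rate schedule starts fresh, whereas the --continue_learning flag on run.py restores everything including the step counter. A hypothetical config fragment for fine-tuning:

```python
# Hypothetical fine-tuning fragment of an OpenSeq2Seq Jasper config.
# Assumptions: "load_model" is supported in your OpenSeq2Seq version and
# the checkpoint path below is illustrative, not a real location.
base_params = {
    # Load pretrained weights only; global step and optimizer state reset,
    # so the LR schedule below starts from the beginning.
    "load_model": "checkpoints/jasper_librispeech/",
    "lr_policy_params": {
        "learning_rate": 1e-4,  # much smaller than the from-scratch LR
        "power": 2.0,
    },
    "num_epochs": 50,
}
```

If instead you pass --continue_learning, the restored global step can place you far along the decay schedule, which may explain a learning rate that looks like it was never re-initialized.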
@blisc and @borisgin: Hi, it's still me. I was curious about your approach too, so I continued from the Jasper model pretrained on LibriSpeech, using an n-gram language model and beam search on our tax and financial dataset, with a small learning rate and an increased number of epochs. I replaced the datasets in train_params, eval_params, and infer_params with our training, dev, and test files.
The model improved well on the training set, but performance on the validation set is poor. When I tested the pretrained model on the same validation set, it actually did better than my fine-tuned model.
My question: is it possible to continue from the model pretrained on LibriSpeech, since that would save a lot of effort? (My impression is that it is not.) If it is, could you please give us some details on how to train on our dataset from the pretrained checkpoint to get the best results? Could changing the dataset have altered the checkpoints in a way that makes the model perform poorly?
Thank you in advance for your answers, any recommendations will help us a lot.
I have some financial data and I want to use transfer learning to fine-tune the Jasper model for financial speech-to-text. Is a method for this available as part of the toolkit? If not, how do I go about doing it?
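Based on the discussion above, a rough workflow sketch would look like the following (the config filename and checkpoint location are illustrative, and the exact flags should be checked against the OpenSeq2Seq documentation for your version):

```shell
# 1. Obtain the pretrained LibriSpeech Jasper checkpoint and place it
#    where your config's load_model / logdir settings expect it.
# 2. In a copy of the Jasper config, point train/eval/infer CSVs at the
#    financial dataset and lower the learning rate for fine-tuning.
# 3. Resume training from the checkpoint (config name is hypothetical):
python run.py --config_file=example_configs/speech2text/jasper_financial.py \
              --mode=train_eval --continue_learning
```

Decoding with a domain n-gram LM, as suggested earlier in this thread, is a separate step applied at inference time and does not require retraining the acoustic model.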