busishengui opened 4 years ago
@busishengui Did you ever resolve this issue? Having similar issues on a different dataset.
@mitchelldehaven No, I have not solved this problem yet. Which dataset do you use? Is it WSJ?
This issue can happen when your network gets stuck in a local optimum that tends to predict silence at every frame. You can tune your network more carefully, or introduce a curriculum learning method such as training from short to long utterances. I've heard people report similar issues on LibriSpeech and then get them solved by training on short utterances first and moving to longer ones afterwards. Another cause would be the small leaky paths introduced in the numerator graphs. We are going to work that out by doing the computation in the log domain. There is a temporary version at: https://github.com/YiwenShaoStephen/pychain/pull/10. It does the computation on the CPU, so it is not that fast for now.
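The short-to-long curriculum mentioned above can be sketched as a simple length-based ordering of the training utterances. This is a minimal illustration, not pychain code; the names `curriculum_order` and `utt_lengths` are hypothetical.

```python
# Minimal sketch of a length-based curriculum: train on the shortest
# utterances first, then move to longer ones. `utt_lengths` maps each
# utterance id to its duration in frames (hypothetical data structure).
def curriculum_order(utt_ids, utt_lengths):
    """Return utterance ids sorted from shortest to longest."""
    return sorted(utt_ids, key=lambda u: utt_lengths[u])


ids = ["utt3", "utt1", "utt2"]
lengths = {"utt1": 250, "utt2": 90, "utt3": 400}
print(curriculum_order(ids, lengths))  # ['utt2', 'utt1', 'utt3']
```

In practice you would feed the first portion of this ordering to the data loader for the early epochs, then gradually admit the longer utterances.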
@YiwenShaoStephen Thank you very much for your reply. I'll give it a try.
@YiwenShaoStephen You deleted the Chainloss function in loss.py, but what is the new loss function?
@YiwenShaoStephen I have tried the three methods you suggested, but the loss does not converge on either the training set or the validation set, and the model still cannot produce correct results. Do you have any other suggestions?
In dataset.py, a comment on the variable graph says 'if self.train: # only training data has fst (graph)', which implies that validation and test data do not need a graph. However, in train.py the validation mode also computes loss = criterion(outputs, output_lengths, graphs), and when I use the validation data I get the error: raise Exception("An empty graph encountered!") Exception: An empty graph encountered!
@cocowf The training/valid graphs are generated by composing the transcription with denominator.fst. However, the denominator.fst is estimated on the training data only, so you would probably get an empty numerator fst when you compose a validation/test transcript with denominator.fst. A quick solution is to skip utterances with empty graphs, as done here: https://github.com/YiwenShaoStephen/pychain_example/blob/master/dataset.py#L146.
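The skipping step can be sketched as a filter applied when assembling a minibatch. This is an illustrative sketch only, assuming each sample is a dict with a "graph" entry and that the graph object exposes a state count (here called `num_states`, a hypothetical attribute); it does not reproduce the actual pychain_example code linked above.

```python
# Sketch: drop utterances whose composed numerator graph is empty
# before forming a minibatch, so ChainGraph never sees an empty fst.
def filter_empty_graphs(samples):
    """Keep only samples whose graph exists and has at least one state."""
    kept = []
    for s in samples:
        g = s.get("graph")
        if g is not None and g.num_states > 0:  # num_states is assumed
            kept.append(s)
    return kept


class FakeGraph:
    """Stand-in for a composed numerator graph (for demonstration)."""
    def __init__(self, n):
        self.num_states = n


samples = [{"id": "a", "graph": FakeGraph(5)},
           {"id": "b", "graph": FakeGraph(0)},
           {"id": "c", "graph": None}]
print([s["id"] for s in filter_empty_graphs(samples)])  # ['a']
```

A natural place for such a filter is the dataset's collate function, so that every minibatch handed to the loss contains only non-empty graphs.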
Thanks Yiwen, there is another question. You mean that valid/test data do not have the attribute sample['graph'], but the loss function is criterion(outputs, output_lengths, graphs). When we skip empty graphs in valid/test, how does the validation loss work?
@cocowf By skipping, I mean you will skip that utterance (the one with an empty graph) when you form a minibatch, so that all the utterances in the minibatch have non-empty graphs.
So by skipping the utterances with empty graphs, every utterance in the minibatch has a non-empty graph?
Yes, all the utterances within the minibatch will have non-empty graphs.
"An empty graph encountered!" occured before skipping the empty graph ,because of graph = ChainGraph(fst, log_domain=True),raise Exception in pychain/graph.py.
Oh yes, that's due to changes introduced in the pychain code for its usage in Espresso. You can refer to this thread: https://github.com/YiwenShaoStephen/pychain_example/issues/5 and temporarily comment out this line in pychain: https://github.com/YiwenShaoStephen/pychain/blob/master/pychain/graph.py#L69
I was wondering why, before skipping, a small part of the validation set had non-empty graphs while all the rest were empty.
Did you ever use pychain on a different dataset, such as Mandarin? How did you generate the files related to the language model?
I used mini-Librispeech as the training and test dataset, with the TDNN from the example as the model. The whole run finished without errors, but the final WER is 100%: $WER 100.00% [20138/20138, 0 ins, 20138 del, 0 sub]
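A quick sanity check on that scoring line: WER is (insertions + deletions + substitutions) divided by the number of reference words. With 20138 deletions out of 20138 reference words and no insertions or substitutions, every reference word was deleted, which is consistent with the silence-prediction local optimum discussed earlier in this thread. A minimal sketch of the arithmetic (the function name `wer` is illustrative, not part of any scoring tool):

```python
def wer(ins, dels, subs, ref_len):
    """Word error rate as a percentage: (ins + del + sub) / ref words."""
    return 100.0 * (ins + dels + subs) / ref_len


# The reported result: 0 insertions, 20138 deletions, 0 substitutions
# over 20138 reference words -- the decoder output is effectively empty.
print(wer(0, 20138, 0, 20138))  # 100.0
```

An all-deletion 100% WER usually means the acoustic model outputs silence/blank everywhere rather than a decoding or scoring bug, so the curriculum and leaky-path fixes above are the first things to try.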