Closed xiaoshimeng closed 10 months ago
Could you please share the log file? I haven't encountered this issue before.
I have uploaded the log.txt file, but it only contains three epochs. The WER and loss stayed very large, so training was terminated. I hope you can give me your advice.
I notice that your feeder mode is 'test'. I haven't observed other differences, but there may be some. I have also uploaded my log.txt, which you can refer to. You can try halving the lr from 0.0001 to 0.00005 and changing the lr decay rate (gamma in 'optimizer.py') from 0.2 to 0.5.
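For reference, the lr and gamma changes above can be sketched like this (a minimal sketch, assuming 'optimizer.py' builds a standard PyTorch optimizer with a MultiStepLR scheduler; the model and milestones here are illustrative, not CorrNet's actual values):

```python
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import MultiStepLR

# Toy model standing in for CorrNet; the scheduler logic is what matters here.
model = nn.Linear(10, 2)

# Halve the base learning rate: 0.0001 -> 0.00005.
optimizer = optim.Adam(model.parameters(), lr=0.00005, weight_decay=0.0001)

# Soften the decay: gamma 0.2 -> 0.5 (milestone epochs are illustrative).
scheduler = MultiStepLR(optimizer, milestones=[20, 30], gamma=0.5)

for epoch in range(40):
    optimizer.step()   # normally preceded by a forward/backward pass
    scheduler.step()

# After both milestones: 0.00005 * 0.5 * 0.5 = 0.0000125
print(optimizer.param_groups[0]["lr"])
```

With gamma at 0.5 instead of 0.2, the lr after each milestone stays larger, which can help when the loss plateaus early.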
Thank you very much.
Hello, if I want to train CorrNet on CSL-Daily, do I need to change the model structure as described in the README for evaluating CSL-Daily?
I set CorrNet up like that to save memory. If you have enough memory, you don't need to change the network structure.
In your paper, was the CSL-Daily WER obtained with the modified network structure, or without modifying it?
The WER is reported with only the two proposed modules (the modified network structure).
What I mean is that the original two modules have been modified as shown in the picture above.
Originally, CorrNet has three modules. For CSL-Daily, we keep only two due to memory constraints and report that WER in the paper. You can try using all three modules if memory allows.
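If you want to experiment with keeping two versus three modules, one common pattern is to gate the blocks behind a config value (a minimal sketch; `num_modules` and this toy backbone are hypothetical, not CorrNet's actual API):

```python
import torch
import torch.nn as nn

class GatedBackbone(nn.Module):
    """Toy backbone that keeps only the first `num_modules` blocks,
    mimicking dropping one module to fit CSL-Daily into memory."""
    def __init__(self, num_modules=3):
        super().__init__()
        blocks = [nn.Linear(16, 16), nn.Linear(16, 16), nn.Linear(16, 16)]
        self.blocks = nn.ModuleList(blocks[:num_modules])

    def forward(self, x):
        for block in self.blocks:
            x = torch.relu(block(x))
        return x

full = GatedBackbone(num_modules=3)    # enough memory: use all three
light = GatedBackbone(num_modules=2)   # two-module setting from the paper
print(len(full.blocks), len(light.blocks))
```

Keeping the cut controlled by a single argument makes it easy to restore the full model when a larger GPU is available.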
I see. Thank you very much for your answer.
Hello, I'm training on the phoenix2014 dataset right now, but it takes a long time. Could you share your log.txt?
The link is here.
The default configurations are placed in baseline.yaml. You can directly train the model following the instructions in readme.md. Besides, what's your WER now?
Hello, I can't reproduce your accuracy on PHOENIX14. What are your parameter settings on PHOENIX14, such as lr and weight_decay? Could you provide the configuration file for PHOENIX14?
Thank you very much for your answer, WER can now reach 18.9%.
Hi, I'm using CorrNet to train on the CSL-Daily dataset, but the loss and WER have stayed very large and drop very little each epoch. I hope you can give some advice.