hulianyuyy / CorrNet

Continuous Sign Language Recognition with Correlation Network (CVPR 2023)

Train problem #4

Closed: xiaoshimeng closed this issue 10 months ago

xiaoshimeng commented 1 year ago

Hi, I am using CorrNet to train on the CSL-Daily dataset, but the loss and WER stay very high and drop very little each epoch. I hope you can give some advice.

hulianyuyy commented 1 year ago

Could you please show me the log file? I haven't encountered this issue before.

xiaoshimeng commented 1 year ago

log.txt

xiaoshimeng commented 1 year ago

I have uploaded the log.txt file, but it only covers three epochs. The WER and loss stayed very large, so I terminated the training. I hope you can give me your advice.

hulianyuyy commented 1 year ago

I notice that your feeder mode is 'test'. I haven't observed any other differences, but there may be some. I have also uploaded my log.txt, which you can refer to. You could reduce the lr by half from 0.0001 to 0.00005 and change the lr decay rate (gamma in 'optimizer.py') from 0.2 to 0.5.
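For reference, a minimal PyTorch sketch of that kind of adjustment; the optimizer type, milestones, and weight decay below are illustrative placeholders, not the repository's actual settings:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(512, 1296)  # stand-in for the recognition model

# lr halved from 0.0001 to 0.00005 (weight decay here is a placeholder value)
optimizer = optim.Adam(model.parameters(), lr=0.00005, weight_decay=1e-4)

# gamma raised from 0.2 to 0.5 so each scheduled decay shrinks the lr less aggressively
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40, 60], gamma=0.5)

for epoch in range(80):
    # ... run one training epoch here (forward, CTC loss, backward, optimizer.step per batch) ...
    scheduler.step()
```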

xiaoshimeng commented 1 year ago

Thank you very much.

xiaoshimeng commented 1 year ago

Hello, if I want to train CorrNet on CSL-Daily, do I need to change the model structure as described in the README for evaluating CSL-Daily?

hulianyuyy commented 1 year ago

I set up CorrNet like that to save memory. If you have enough memory, you don't need to change the network structure.

xiaoshimeng commented 1 year ago

Is the CSL-Daily WER reported in your paper obtained with the modified network structure, or without modifying it?

hulianyuyy commented 1 year ago

The WER is reported with only two of the proposed modules (i.e., the modified network structure).

xiaoshimeng commented 1 year ago

(image attached) What I mean is that two of the original modules have been modified, as shown in the picture above.

hulianyuyy commented 1 year ago

Originally, CorrNet has three modules. For CSL-Daily, we keep only two due to memory constraints and report that WER in the paper. You can try using all three modules if you have enough memory.
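As a purely hypothetical illustration of "keeping two of the three modules", the sketch below gates a correlation-style branch per stage behind a constructor argument; the class, channel sizes, and stage numbering are made up and do not mirror CorrNet's actual code:

```python
import torch
import torch.nn as nn

class ToyBackbone(nn.Module):
    """Three stages; a correlation-style branch is attached only to the selected ones."""
    def __init__(self, corr_stages=(2, 3)):  # pass (2, 3, 4) to enable all three
        super().__init__()
        self.stages = nn.ModuleList([nn.Conv3d(8, 8, 3, padding=1) for _ in range(3)])
        self.corr = nn.ModuleDict({str(i): nn.Conv3d(8, 8, 1) for i in corr_stages})

    def forward(self, x):
        for idx, stage in enumerate(self.stages, start=2):  # stages numbered 2..4
            x = stage(x)
            if str(idx) in self.corr:
                x = x + self.corr[str(idx)](x)  # residual correlation-style branch
        return x

model = ToyBackbone(corr_stages=(2, 3))            # two of three, to save memory
out = model(torch.randn(1, 8, 4, 16, 16))          # (N, C, T, H, W)
```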

xiaoshimeng commented 1 year ago

I see. Thank you very much for your answer.

RitaKANG commented 1 year ago

> You could reduce the lr by half from 0.0001 to 0.00005 and change the lr decay rate (gamma in 'optimizer.py') from 0.2 to 0.5.

Hello, I'm training on the phoenix2014 dataset right now, but I find it takes a long time. Could you share your log.txt?

hulianyuyy commented 1 year ago

> Hello, I'm training on the phoenix2014 dataset right now, but I find it takes a long time. Could you share your log.txt?

The link is here.

hulianyuyy commented 10 months ago

> Hello, I can't reproduce your accuracy on PHOENIX14. What are your parameter settings on PHOENIX14, such as lr and weight_decay? Could you provide the configuration file for PHOENIX14?

The default configurations are in baseline.yaml; you can train the model directly by following the instructions in readme.md. By the way, what is your WER now?
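For anyone comparing settings, here is a small sketch of loading and overriding the YAML config before launching training; the file path and key names are assumptions based on similar codebases, not verified against this repository:

```python
import yaml

# hypothetical path and key names; check the repository's actual baseline.yaml
with open("configs/baseline.yaml") as f:
    cfg = yaml.safe_load(f)

# e.g. halve the learning rate without editing the file on disk
cfg.setdefault("optimizer_args", {})["base_lr"] = 0.00005
print(cfg.get("optimizer_args"))
```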

xiaoshimeng commented 10 months ago

Thank you very much for your answer. The WER can now reach 18.9%.