Holmeyoung / crnn-pytorch

Pytorch implementation of CRNN (CNN + RNN + CTCLoss) for all language OCR.
MIT License

Best tuning for certain applications #2

Closed mariembenslama closed 5 years ago

mariembenslama commented 5 years ago

I have a dataset composed of English, Japanese, and Korean characters (3,340 characters in total, because of the Japanese kanji).

I can't seem to find the right parameters for this problem; the accuracy is mostly 0.000.

I tried lr = 0.0001, epochs = 900, and batch size = 2, but the accuracy is still not good.

I'm wondering: when you have a large number of classes, what's the best way to train the model and tune the parameters? Do we take it slowly and use small values?

Holmeyoung commented 5 years ago

@mariembenslama Hi,

  1. Check your data.

    How many training samples do you have? In my case, I used 5×10^6 samples to train 1,000 characters. To get a satisfying result, we need enough clean data.

  2. Check your params.

    Some suggestions:

    • stage1

      nepoch = 1000
      batchSize = 64
      lr = 0.001
      
      displayInterval = 100
      valInterval = 1000
      saveInterval = 1000

      Set lr = 0.001; we need a larger lr at first to avoid getting stuck in a local optimum.

      Set batchSize = 64; a small batch size (e.g., 2) slows down convergence and makes it hard to find the right direction of gradient descent.

      Set nepoch = 1000; this setting works together with displayInterval, valInterval, and saveInterval. The validation accuracy is printed every valInterval iterations and the model is saved every saveInterval iterations. When the accuracy starts oscillating, kill the training process manually and move on to stage2.

    • stage2

      lr = 0.0001
      pretrained = 'path/to/your/model'

      Yes, just set lr = 0.0001 to make the result more stable, and load the model from stage1 so you don't train from scratch (see the sketch after this list).

    • about stages 1 and 2

      nepoch can be arbitrarily large, because you can kill the process manually once a well-performing model has been saved. So set saveInterval to a sensible value: too small wastes disk space, while too large may skip over a good model. It should equal valInterval, so that the model is saved right after its accuracy is measured.

  3. Check your loss and accuracy.

    Some suggestions:

    The loss value is more important than the accuracy!

    If your training loss keeps getting smaller, it's OK. If the loss keeps getting smaller but the accuracy oscillates up and down, switch to a smaller lr and load the trained model to continue training.

    If your training loss doesn't converge, hmm, the likely causes are:

    • too little data
    • mistakes in the data
    • some metaphysical problem
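
For concreteness, here is a minimal sketch of the two-stage schedule from point 2. It is a generic PyTorch CRNN + CTCLoss training loop, not this repo's exact train.py; the Adam optimizer and the loader format (images, targets, target lengths) are assumptions:

    import torch
    import torch.nn as nn

    def train_stage(model, loader, lr, nepoch, save_interval,
                    ckpt="crnn.pth", pretrained=None):
        # Stage 2 resumes from the stage-1 checkpoint instead of training from zero.
        if pretrained is not None:
            model.load_state_dict(torch.load(pretrained))
        criterion = nn.CTCLoss(blank=0, zero_infinity=True)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        step = 0
        for epoch in range(nepoch):
            for images, targets, target_lengths in loader:
                log_probs = model(images).log_softmax(2)   # (T, N, nclass)
                T, N = log_probs.size(0), log_probs.size(1)
                input_lengths = torch.full((N,), T, dtype=torch.long)
                loss = criterion(log_probs, targets, input_lengths, target_lengths)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                step += 1
                if step % save_interval == 0:
                    torch.save(model.state_dict(), ckpt)

    # stage 1: larger lr to escape poor local optima early on
    # train_stage(model, train_loader, lr=0.001, nepoch=1000, save_interval=1000)
    # stage 2: smaller lr, resume from the saved stage-1 model
    # train_stage(model, train_loader, lr=0.0001, nepoch=1000, save_interval=1000,
    #             pretrained="crnn.pth")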
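
And a sketch of the greedy CTC decoding behind the validation-accuracy check mentioned above (again a generic implementation, not code from this repo; index 0 is assumed to be the CTC blank):

    import torch

    def greedy_decode(log_probs, alphabet, blank=0):
        # log_probs: (T, N, nclass). Collapse repeated symbols, then drop blanks.
        best = log_probs.argmax(2)               # (T, N) best class per timestep
        texts = []
        for n in range(best.size(1)):
            chars, prev = [], blank
            for idx in best[:, n].tolist():
                if idx != blank and idx != prev:
                    chars.append(alphabet[idx - 1])  # alphabet holds the non-blank classes
                prev = idx
            texts.append("".join(chars))
        return texts

A prediction usually counts as correct only when the decoded string matches the label exactly, which is why the accuracy can sit at 0.000 for a long time even while the loss is falling.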

Sorry for the late reply; we are in different time zones.

Hope this can help you.

Holmeyoung commented 5 years ago

@mariembenslama have you solved the accuracy problem?

mariembenslama commented 5 years ago

I'm still following your proposed solution, so right now I'm creating a dataset of 5×10^6 × 4 images with variable text lengths (< 26 characters), which is going to take some time.
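
Something along these lines, in case it helps anyone (a minimal Pillow sketch; the charset and font path are placeholders, not the actual generator):

    import random
    from PIL import Image, ImageDraw, ImageFont

    CHARSET = "..."                     # placeholder: the full 3,340-character alphabet
    FONT_PATH = "path/to/cjk_font.ttf"  # placeholder: a font covering all three scripts

    def make_sample(max_len=25, height=32):
        # Render a random string of length 1..max_len as a grayscale text line.
        text = "".join(random.choices(CHARSET, k=random.randint(1, max_len)))
        font = ImageFont.truetype(FONT_PATH, size=height - 8)
        measure = ImageDraw.Draw(Image.new("L", (1, 1)))
        left, top, right, bottom = measure.textbbox((0, 0), text, font=font)
        img = Image.new("L", (right - left + 10, height), color=255)
        ImageDraw.Draw(img).text((5 - left, -top), text, font=font, fill=0)
        return img, text  # save the image, keep the string as its label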

I'm using Google Colaboratory.

However, I agree that following your solution will give good accuracy :)

Thank you very much for the help 😀😀😀