lijo381 opened 4 years ago
Are your test images comparable to the training images?
The training generator will create samples that resemble printed characters on paper; this will be unsuitable if your test data is scene text.
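One common way to narrow that gap is to degrade the clean synthetic samples so they look closer to camera-captured text. Below is a minimal sketch of that idea, assuming each sample is a grayscale H x W uint8 array; the function name `roughen` and the jitter/noise parameters are illustrative, and a real scene-text pipeline would typically also add blur, perspective warps, and background clutter.

```python
import numpy as np

def roughen(img, seed=0):
    """Degrade a clean synthetic text crop (H x W uint8 array) so it
    looks less like printed-on-paper text: random brightness jitter
    plus Gaussian sensor-style noise. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    out = img.astype(np.float32) * rng.uniform(0.7, 1.1)  # lighting jitter
    out += rng.normal(0.0, 8.0, img.shape)                # sensor-style noise
    return np.clip(out, 0, 255).astype(np.uint8)

# usage: a flat white patch standing in for one generator sample
sample = np.full((32, 128), 255, dtype=np.uint8)
aug = roughen(sample)
print(aug.shape, aug.dtype)
```

Applying this kind of degradation on the fly during training (rather than baking it into a fixed 10k set) also gives the model more variation per epoch.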
I am a little bit limited in GPU time right now due to another project, but I can try to train a model next week if you can provide specific specs.
Yes, I saved the generator's output images as JPEGs and tested on them.
Secondly, my original data is scene text, and since I didn't have much scene-text data in the first place, I used the generator to create more. Training converges properly, but testing on the JPEGs does not give good results.
To verify this, I generated around 10k images, saved them, and trained the model on them. Training had converged, but testing on the same 10k images gave only around 85 percent accuracy.
I am trying to predict the price of a commodity from an image patch. The patch follows a standard pattern: "MRP" followed by the price (e.g. MRP. 375.00). It would be great if you could help.
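Since every patch follows that fixed layout, one cheap sanity check is to validate the CRNN's decoded string against the expected pattern and reject implausible predictions. A minimal sketch, assuming the format shown in the example above ("MRP" optionally followed by a period, then a decimal price); the regex and helper name are my own, not part of any library:

```python
import re

# Assumed pattern based on the example "MRP. 375.00":
# "MRP", optional period, optional spaces, then a price with up to
# two decimal places.
PRICE_RE = re.compile(r"MRP\.?\s*(\d+(?:\.\d{1,2})?)")

def extract_price(decoded):
    """Return the price as a float, or None if the decoded text
    does not match the expected 'MRP <price>' layout."""
    m = PRICE_RE.search(decoded.upper())
    return float(m.group(1)) if m else None

print(extract_price("MRP. 375.00"))  # 375.0
print(extract_price("garbled"))      # None
```

This won't fix the underlying domain gap, but it filters out decodes that can't possibly be valid prices.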
I trained a CRNN model from scratch using the generator. Each word follows a pattern: it always starts with a keyword followed by some characters and numbers (e.g. PATTERN xy123). The training loss is great, but during testing, if I read from an image and try prediction, the results are really poor. What could be the probable reason?