daquilnp opened this issue 4 years ago

Hey again, I had a few questions about the loss functions you used for the Localization net during training.
In the Out Of Image loss calculation you +/- 1.5 the bbox instead of +/- 1 (like in your paper), why do you do this?
Also, why are you using corner coordinates for the loss calculations?
Was the DirectionLoss used in your paper?
Good questions :wink:
1 or 1.5 does not really matter. Does that answer your questions?
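(For illustration, a minimal numpy sketch of the kind of out-of-image penalty being discussed, assuming box corners in normalized coordinates where the image spans [-1, 1]; this only conveys the idea and is not the repo's actual code.)

```python
import numpy as np

# Illustrative out-of-image penalty: corners that stray further out than a
# chosen bound (1 or 1.5, which, as said above, does not really matter) are
# penalized linearly. Sketch only, not the repo's implementation.
def out_of_image_loss(corners, bound=1.5):
    # corners: array of shape (..., 2) with normalized x/y coordinates
    overshoot = np.maximum(np.abs(corners) - bound, 0.0)
    return overshoot.mean()

corners = np.array([[-1.7, 0.2], [0.9, 1.6], [0.1, -0.3]])
print(out_of_image_loss(corners))  # only the two out-of-bounds values contribute
```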
Yes, that answers everything, thank you! :) I assumed using corner coordinates was to save computation time, but I wanted to make sure. Also, what accuracy did you get on the SynthText validation set?
Happy I could answer your questions! We got about 91% validation accuracy on the SynthText validation set.
Awesome. Thank you again :) I'll try to aim for a similar accuracy, although I also cannot get the SynthAdd dataset (the authors of the dataset have not been monitoring their issues :S)
Follow-up question: when you say 91%, do you mean the percentage of correct characters or the percentage of correct words? And does that take case sensitivity into account?
91% is the case-insensitive word accuracy; I should have said that immediately :sweat_smile:
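In case it helps anyone reading along, a tiny made-up illustration of that metric (the strings below are invented, not from the dataset): a prediction only counts as correct if the whole word matches the ground truth, ignoring case.

```python
# Case-insensitive word accuracy: a prediction is correct only if the whole
# word matches the ground truth after lowercasing. Example strings are made up.
predictions = ["Football", "xref:", "HELLO", "w0rld"]
ground_truth = ["FOOTBALL", "Xref:", "hello", "world"]

correct = sum(p.lower() == g.lower() for p, g in zip(predictions, ground_truth))
print(correct / len(ground_truth))  # 0.75
```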
Hello @Bartzi, I'm currently looking at the output of the chainer localization net with the pretrained model.
I've noticed that the bounding boxes find characters in images from right to left. Is that what is supposed to happen?
I've also noticed there's a lot of overlap between the characters. Do you remove the duplicates in some way?
Predicting the characters from right to left is one of the interesting things the model does on its own. It learns by itself which reading direction to use, so right to left is perfectly acceptable. I also think this is a better choice for the network, since it essentially is a sequence-to-sequence model and it operates like a stack.
Yes, there is a lot of overlap, and this is also intended. There is no need to remove the duplicates; this is what the transformer is for. The encoder takes all features from the RoIs and hands them to the decoder, which then predicts the characters without overlap.
Ok, that makes sense. I just wanted to make sure I was running it correctly.
As for the overlapping, I am aware that the transformer's decoder is meant to remove duplicates. However, I was testing the pretrained recognition model on this image from the SynthText validation dataset
And the result from the decoder was: :::::::fffeeeerrrXXXXXX
Interesting... do you have some code that I could have a look at?
Ok, very strange. I cleaned up my code to send to you, and when I ran it, I got the correct result. I might have introduced an error in my original implementation and fixed it during the cleanup. It looks like everything works as expected. I am getting the result: Xref: :)
ah, good :wink:
For future reference: the issue arises if you mix up num_chars and num_words. Intuitively, num_chars should be 23 and num_words should be 1, but for some reason they were reversed in my npz.
Yeah, that's right! It is interesting, though, that the model still provides a good prediction if you set those two numbers incorrectly.
@Bartzi First of all, thanks for your code! Regarding num_chars and num_words in the *.npz files: I checked synthadd.npz and mjsynth.npz, and in both cases num_chars = 1 and num_words = 23. Intuitively it should be swapped, is this correct? I tried swapping them, but got an error in the Reshape layer. Thank you!
Yes, this is actually intended :sweat_smile: Our original work came from the idea that we want to extract one box per word, with multiple characters per box. However, we thought: what if we only have a single word, but want to localize individual characters? The simplest solution is to redefine the way you look at it. Now we want to find a maximum of 23 words (each character is defined to be a single word) with one character each.
This is the way you have to think about it.
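For anyone who wants to double-check their own data, a minimal numpy sketch for inspecting those two values in a dataset .npz (the file and key names simply mirror the discussion above; adjust them to whatever your file actually contains):

```python
import numpy as np

# Inspect the metadata stored in a dataset .npz file.
data = np.load("mjsynth.npz", allow_pickle=True)
print(data.files)                        # list every array stored in the file
print("num_chars:", data["num_chars"])   # expected: 1  (one character per "word")
print("num_words:", data["num_words"])   # expected: 23 (up to 23 "words", i.e. characters)
```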
I see, it's clear now! Thank you!
Dear @Bartzi, sorry to disturb you, another question. According to your paper, the localization network tries to find and "crop" individual characters, for example the FOOTBALL word in Fig. 1. In my case I see another behavior: it looks like the localization network crops regions containing sets of characters, and moreover these regions overlap significantly. Please see the example below. As far as I understand there is no limitation against that and the whole system can work like this, but I'm a bit confused by the different behavior. Thank you!
PS: training converged with 96% accuracy, so my model works fine!
Hmm, it seems to me that the localization network never felt the need to converge to localizing individual characters, as the task for the recognition network was too simple. You could try a very simple trick: start a new training run, but instead of randomly initializing all parameters, load the pre-trained weights of the localizer. In this way the localizer is encouraged to improve further, because the freshly initialized recognition network behaves badly at first.
We did this in previous work and it worked very well in such cases.
You could also try to lower the learning rate of the recognition network to encourage the localization network to try harder to make it easier for the recognition network.
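A minimal Chainer sketch of that warm-start trick, assuming the localizer and recognizer are separate Chains and that a snapshot of the localizer from a previous run exists; the class names, layer sizes, and file name below are placeholders, not taken from the repo:

```python
import chainer
import chainer.links as L
from chainer import serializers

# Placeholder stand-ins for the real localization and recognition networks;
# only the weight-loading pattern matters here.
class Localizer(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.fc = L.Linear(None, 6)   # e.g. predicts one affine matrix

class Recognizer(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.fc = L.Linear(None, 52)  # e.g. per-character class scores

localizer = Localizer()
recognizer = Recognizer()  # keeps its fresh random initialization

# Load only the localizer weights from the previous run (example file name),
# then train both networks together as usual.
serializers.load_npz("trained_localizer.npz", localizer)
```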
Thank you! Using pre-trained weights looks very promising, I will try it! Also, I was thinking about the image above too, and you are right: the recognition task is very simple (a license plate recognition sample), so there is no curved or otherwise complicated text at all. Basically there is no need to apply an array of affine matrices; a single one for the whole image is enough. Maybe this is the reason.
Yes, it might not be necessary to use the affine matrices. You could also just train the recognition network on patches you extract with a regular sliding window. So, basically, our model without the localization network, where you provide the input to the recognition network yourself, using a simple and regular sliding window approach.
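A rough numpy sketch of that sliding-window alternative (the window width and stride are arbitrary example values, not taken from the paper):

```python
import numpy as np

# Cut a word image into fixed-size, regularly spaced patches and feed those to
# the recognition network instead of localizer-predicted regions.
def sliding_window_patches(image, window_width=32, stride=16):
    width = image.shape[1]
    patches = []
    for x in range(0, max(width - window_width, 0) + 1, stride):
        patches.append(image[:, x:x + window_width])
    return np.stack(patches)

word_image = np.zeros((64, 200, 3), dtype=np.uint8)  # dummy input image
patches = sliding_window_patches(word_image)
print(patches.shape)  # (11, 64, 32, 3): one 64x32 patch per window position
```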
Thank you!
Hi @Bartzi, thank you for the good advice! Using the pre-trained localizer weights helps a lot, and the final accuracy is about 2% better.
Nice, that's good to hear. And the image looks the way it is supposed to :+1: