Bartzi / kiss

Code for the paper "KISS: Keeping it Simple for Scene Text Recognition"
GNU General Public License v3.0

Loss Functions #9

Open daquilnp opened 4 years ago

daquilnp commented 4 years ago

Hey again, I had a few questions about the loss functions you used for the Localization net during training.

Bartzi commented 4 years ago

Good questions :wink:

Does that answer your questions?

daquilnp commented 4 years ago

Yes, that answers everything, thank you! :) I assumed using corner coordinates was to save computation time, but I wanted to make sure. Also, what accuracy did you get on the SynthText validation set?

Bartzi commented 4 years ago

Happy I could answer your questions! We got about 91% validation accuracy on the SynthText validation set.

daquilnp commented 4 years ago

Awesome. Thank you again :) I'll try and aim for a similar accuracy, although I also cannot get the SynthAdd dataset (the authors of the dataset have not been monitoring their issues :S).

daquilnp commented 4 years ago

Follow up question. When you say 91% do you mean percentage of correct characters or percentage of correct words? And does that include case sensitivity?

Bartzi commented 4 years ago

91% is the case-insensitive word accuracy, I should have said that right away :sweat_smile:
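
To be explicit about the metric: a sample counts as correct only if the entire predicted word matches the label, ignoring case. A minimal sketch of the metric (just shorthand here, not the actual evaluation code from this repository):

```python
# Minimal sketch of case-insensitive word accuracy; illustrative only.
def word_accuracy(predictions, labels):
    correct = sum(p.lower() == l.lower() for p, l in zip(predictions, labels))
    return correct / len(labels)

print(word_accuracy(["Hello", "W0rld"], ["hello", "world"]))  # 0.5
```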

daquilnp commented 4 years ago

Hello @Bartzi, I'm currently looking at the output of the Chainer localization net with the pre-trained model. Two things surprised me: the model seems to predict the characters from right to left, and the predicted regions overlap quite a lot. Is that expected?

Bartzi commented 4 years ago

Predicting the characters from right to left is one of the interesting things the model does on its own. It learns by itself which reading direction to use, so right to left is perfectly acceptable. I also think this is a better choice for the network: since it is essentially a sequence-to-sequence model, it operates like a stack.

Yes, there is a lot of overlap, and this is also intended. There is no need to remove the duplicates; that is what the transformer is for. The encoder takes all features from the RoIs and hands them to the decoder, which then predicts the characters without overlap.
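
If it helps, here is a very rough sketch of the idea with made-up shapes (not the actual KISS code): the encoder sees the features of all RoIs at once, overlap included, while the decoder emits exactly one character per step, which is why duplicates from overlapping RoIs never reach the final string.

```python
import numpy as np

# Made-up dimensions for illustration only.
num_rois, feature_dim, vocab_size, max_text_len = 23, 512, 100, 23

roi_features = np.random.randn(num_rois, feature_dim)  # overlapping crops
memory = roi_features  # stands in for the transformer encoder output

def decoder_step(memory, decoded_so_far):
    # Placeholder for one decoder step: a real step attends over `memory`
    # conditioned on the characters decoded so far and returns logits over
    # the vocabulary; random numbers stand in for those logits here.
    return np.random.randn(vocab_size)

decoded = []
for _ in range(max_text_len):  # one character per step, no duplicates emitted
    decoded.append(int(np.argmax(decoder_step(memory, decoded))))
```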

daquilnp commented 4 years ago

OK, that makes sense. I just wanted to make sure I was running it correctly.

As for the overlapping, I am aware that the transformer's decoder is meant to remove duplicates. However, I was testing the pre-trained recognition model on this image from the SynthText validation dataset: xref

And the result from the decoder was: :::::::fffeeeerrrXXXXXX

Bartzi commented 4 years ago

Interesting... do you have some code that I could have a look at?

daquilnp commented 4 years ago

OK, very strange. I cleaned up my code to send to you, and when I ran it, I got the correct result. I might have introduced an error in my original implementation and fixed it during the clean-up. It looks like everything works as expected; I am getting the result: Xref: :)

Bartzi commented 4 years ago

ah, good :wink:

daquilnp commented 4 years ago

For future reference: the issue arises if you mix up num_chars and num_words. Intuitively, num_chars should be 23 and num_words should be 1, but for some reason in my npz they were reversed.
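
A quick way to check what your own file contains (the key names follow this discussion and are an assumption on my part; they may differ in your .npz):

```python
import numpy as np

# Inspect the metadata of a KISS ground-truth file; key names assumed
# from this discussion and may differ in other files.
gt = np.load("mjsynth.npz", allow_pickle=True)
print(gt["num_chars"], gt["num_words"])  # this thread suggests 1 and 23
```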

Bartzi commented 4 years ago

Yeah, that's right! It is interesting, though, that the model still provides a good prediction if you set those two numbers the wrong way around.

borisgribkov commented 3 years ago

@Bartzi First of all, thanks for your code! Regarding num_chars and num_words in the *.npz files: I checked synthadd.npz and mjsynth.npz, and in both cases num_chars = 1 and num_words = 23. Intuitively it should be the other way around, is this correct? I tried swapping them, but got an error in the Reshape layer. Thank you!

Bartzi commented 3 years ago

Yes, this is actually intended :sweat_smile: Our original work started from the idea that we want to extract one box per word, with multiple characters per word. Then we asked: what if we only have a single word, but want to localize individual characters? The simplest solution is to redefine the way you look at it: we now want to find a maximum of 23 "words" (each character is defined to be a single word) with one character each.

This is the way you have to think about it.
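
As a rough illustration of that convention (made-up shapes, not the exact KISS implementation): the localizer predicts one affine matrix per (word, character) pair, so with num_words = 23 and num_chars = 1 you get 23 single-character regions.

```python
import numpy as np

# Illustrative shapes only, not the actual KISS code.
batch_size, num_words, num_chars = 4, 23, 1

# one 2x3 affine matrix per predicted region
transform_params = np.zeros((batch_size, num_words * num_chars, 2, 3))

grouped = transform_params.reshape(batch_size, num_words, num_chars, 2, 3)
print(grouped.shape)  # (4, 23, 1, 2, 3) -> 23 regions with one char each

# downstream layers expect the num_words axis to match the maximum text
# length, which is presumably why swapping the two values triggers the
# Reshape error mentioned above
```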

borisgribkov commented 3 years ago

I see, it's clear now! Thank you!

borisgribkov commented 3 years ago

Dear @Bartzi, sorry to disturb you, I have another question. According to your paper, the localization network tries to find and "crop" individual characters, for example the word FOOTBALL in Fig. 1. In my case I see different behavior: it looks like the localization network crops regions containing sets of characters, and moreover these regions overlap significantly. Please see the example below. As far as I understand there is no limitation against this and the whole system can work like that, but I'm a bit confused by the different behavior. Thank you! [image]

PS: training converged to 96% accuracy, so my model works fine!

Bartzi commented 3 years ago

Hmm, it seems to me that the localization network never felt the need to localize individual characters because the task was already easy enough for the recognition network. You could try a very simple trick: start a new training run, but instead of randomly initializing all parameters, load the pre-trained weights of the localizer. This way the localizer is encouraged to improve further, because the freshly initialized recognition network still behaves badly.

We did this in previous work and it worked very well in such cases.
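
In Chainer, the warm start boils down to restoring only the localizer before training; here is a sketch with placeholder names (the class and snapshot path are not the actual KISS ones):

```python
import chainer
import chainer.links as L

# Placeholder network standing in for the real localization net.
class TinyLocalizer(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.fc = L.Linear(None, 6)

localizer = TinyLocalizer()
# restore the weights saved from the previous (converged) run; the
# recognizer is left randomly initialized, so its poor early predictions
# give the localizer a training signal to refine its boxes again
chainer.serializers.load_npz("localizer_snapshot.npz", localizer)
```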

Bartzi commented 3 years ago

You could also try to lower the learning rate of the recognition network, to encourage the localization network to try harder to make it easier for the recognition network.
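
A sketch of what I mean, using a separate optimizer per sub-network (the Linear links stand in for the real networks, and the 10x ratio is just an illustrative choice, not our training config):

```python
import chainer.links as L
from chainer import optimizers

localizer = L.Linear(None, 6)     # stand-in for the localization net
recognizer = L.Linear(None, 100)  # stand-in for the recognition net

localizer_opt = optimizers.Adam(alpha=1e-4)
localizer_opt.setup(localizer)

recognizer_opt = optimizers.Adam(alpha=1e-5)  # lowered for the recognizer
recognizer_opt.setup(recognizer)
```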

borisgribkov commented 3 years ago

Thank you! Using pre-trained weights looks very promising, I will try it! Also, I was thinking about the image above, and you are right, the recognition task is very simple: it is a license plate recognition sample, so there is no curved or otherwise complicated text at all. There is basically no need to apply an array of affine matrices; one for the whole image is enough. Maybe this is the reason.

Bartzi commented 3 years ago

Yes, it might not be necessary to use the affine matrices. You could also just train the recognition network on patches extracted with a regular sliding window: basically our model without the localization network, where you provide the input to the recognition network yourself using a simple sliding-window approach.
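
Something along these lines would replace the localizer (window width and stride are arbitrary illustrative values):

```python
import numpy as np

# Cut fixed-size patches from a text-line image with a regular sliding
# window and feed them to the recognizer instead of learned RoIs.
def sliding_window_patches(image, window_width=32, stride=16):
    width = image.shape[1]
    for x in range(0, max(width - window_width, 0) + 1, stride):
        yield image[:, x:x + window_width]

image = np.zeros((32, 128, 3), dtype=np.uint8)  # dummy text-line image
patches = list(sliding_window_patches(image))
print(len(patches))  # 7 patches for a 128 px wide image
```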

borisgribkov commented 3 years ago

Thank you!

borisgribkov commented 3 years ago

Hi @Bartzi, thank you for the good advice! Using the pre-trained localizer weights helps a lot: [image] The final accuracy is also about 2% better.

Bartzi commented 3 years ago

Nice, that's good to hear. And the image looks the way it is supposed to :+1: