PR-Iyyer opened this issue 6 years ago
Hi,
Your code was probably set up for gapdata/large, which uses the large image size (60x120 px). If you want to use the smaller size, you have to change the size of the input placeholder. Just change this line:
x = tf.placeholder(tf.float32, [None, 7200], name='x')
to
x = tf.placeholder(tf.float32, [None, 3600], name='x')
The images are flattened, so the resulting size is 60x60 = 3600.
Hope this helps, feel free to ask if there is anything else.
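To make the relationship between image size and placeholder width concrete, here is a minimal NumPy sketch (the arrays are hypothetical stand-ins for the dataset images):

```python
import numpy as np

# A small-size gap image: 60 px high, 60 px wide (hypothetical array).
img_small = np.zeros((60, 60), dtype=np.float32)
# The large variant from gapdata/large is 60x120 px.
img_large = np.zeros((60, 120), dtype=np.float32)

# Flattening turns each image into a 1-D vector; its length must match
# the second dimension of the input placeholder.
print(img_small.flatten().shape[0])  # 3600 -> placeholder [None, 3600]
print(img_large.flatten().shape[0])  # 7200 -> placeholder [None, 7200]
```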
Sure, thank you so much. Actually, I got that fixed, but then I got a dimension error. I tried changing it to `reshape_images = tf.reshape(x, [-1, 32, 2, 1])`, which let me start training with your data.
But for my data, it's giving the following error:
InvalidArgumentError: Input to reshape is a tensor with 32400 values, but the requested shape requires a multiple of 64 [[Node: Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_x_0_2, Reshape/shape)]]
Oh, you are reshaping it wrongly. Reshape it to:
tf.reshape(x, [-1, 60, 60, 1])
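The arithmetic behind that error message can be checked with plain NumPy: 32400 values is exactly 9 flattened 60x60 images, which reshapes cleanly to `[-1, 60, 60, 1]` but not to a multiple of 32x2x1 = 64. A minimal sketch (the batch size 9 is only illustrative):

```python
import numpy as np

# A batch of 9 flattened 60x60 images: 9 * 3600 = 32400 values,
# matching the tensor size from the error message.
x = np.zeros((9, 3600), dtype=np.float32)

# Reshaping to [-1, 60, 60, 1] recovers one 60x60 single-channel image
# per row; the batch dimension (-1) is inferred as 9.
images = x.reshape(-1, 60, 60, 1)
print(images.shape)  # (9, 60, 60, 1)
```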
Actually, I tried that before and got the following error; hence I tried 32, 2.
InvalidArgumentError (see above for traceback): logits and labels must have the same first dimension, got logits shape [32,2] and labels shape [64] [[Node: sparse_softmax_cross_entropy_loss/xentropy/xentropy = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"](add_3, _arg_Placeholder_0_0)]]
Ok, I think I may know the problem. Could you please share the code you are running (using a gist or something similar)?
I fixed some bugs in loading images and added a settings section. In the settings you should be able to edit the size of the slider and other variables.
If you want to use your own data, create a folder in data/gapdet/large/
where you place your images, named as label_timestamp.jpg (label is 0 or 1). Images should be 60x120 px; the final crop is done by the slider variable in the code (the height is currently fixed at 60 px).
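Since the label is encoded in the filename, loading your own data boils down to splitting on the first underscore. A minimal sketch (the paths are hypothetical examples of the naming convention above):

```python
from pathlib import Path

def parse_label(path):
    """Read the gap label (0 or 1) from a file named label_timestamp.jpg."""
    # The part before the first underscore is the label.
    return int(Path(path).stem.split("_", 1)[0])

print(parse_label("data/gapdet/large/1_1530200000.jpg"))  # 1
print(parse_label("data/gapdet/large/0_1530200001.jpg"))  # 0
```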
Thank you so much. Let me check it out and I shall update you at the earliest.
How do you prepare your own gap classifier data?
First, it depends on which gap classifier you want to train. I would recommend training GapClassifier-BiRNN.ipynb
because it gives the best accuracy. For training this model you need the data provided in the words2
folder. This folder contains images along with text files (with the same names) which contain arrays of positions (x coordinates) of the vertical lines separating letters.
To extend this folder you can use the WordClassDM.py
script, where you specify the data folder as the folder containing raw word images. The script loads and normalizes the images and then shows them; you can then manually (click and drag) place lines at the positions where the letters should be separated. The lines with the image are then saved by pressing the s
key.
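Assuming the annotation files store the x coordinates as plain whitespace-separated numbers (the exact on-disk format is an assumption here), loading one could look like this:

```python
import io
import numpy as np

# Hypothetical annotation content: x coordinates of the gap lines
# for one word image, as placed in WordClassDM.py.
annotation = "12 34 57 80"

# Each image in words2 has a same-named text file with these positions;
# np.loadtxt parses them into an integer array.
positions = np.loadtxt(io.StringIO(annotation), dtype=int, ndmin=1)
print(positions.tolist())  # [12, 34, 57, 80]
```

In practice you would pass the open text file to `np.loadtxt` instead of a `StringIO` buffer.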
So basically the gap classifier will predict where my gap is, right? If so, then why are you using the slider there?
It is because I am not predicting the array of x-coordinates; I am predicting whether or not there is a gap on the slide. I think it is more efficient than predicting the array, but you can try it the other way.
Right now, I am feeding an array of images (slides) into the classifier, and I use the slider to extract these images from the word image. These slides (patches) are overlapping and are processed by a CNN before they are fed into the RNN, which evaluates for each slide whether or not there is a gap. If you want, you can replace this with a CNN network which extracts these slides (patches), or with the tf.extract_image_patches
function. But you would have to change the code a bit more to predict the array of x-coordinates.
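The sliding extraction described above can be sketched in NumPy as follows; the slider width and step here are illustrative assumptions, not the repository's actual settings:

```python
import numpy as np

def extract_slides(word_img, slider_width=30, step=2):
    """Cut overlapping vertical patches (slides) from a word image.

    word_img: (height, width) array. slider_width and step are
    hypothetical defaults chosen for this sketch.
    """
    h, w = word_img.shape
    return np.stack([word_img[:, x:x + slider_width]
                     for x in range(0, w - slider_width + 1, step)])

# A hypothetical 60 px high, 90 px wide word image.
word = np.zeros((60, 90), dtype=np.float32)
slides = extract_slides(word)
print(slides.shape)  # (31, 60, 30): 31 overlapping slides
```

Each of the 31 slides would then be run through the CNN and the resulting sequence fed to the RNN.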
Thank you for the insights.
What is this doing? Could you please explain this line in GapClassifier-BiRNN?
ind = indices[i] + (-(offset % 2) * offset // 2) + ((1 - offset % 2) * offset // 2)
Yes, it looks a little bit strange. First, in the line:
targets_seq[i] = np.ones((length[i])) * NEG
the target sequence is the same length as the image sequence and represents a label for each image in the sequence. targets_seq
is initialized with the negative label (NEG), so I have to calculate the indexes of the positive labels and change those, as you can see in the line:
targets_seq[i][ind] = POS
In the line you are referring to, I was experimenting with making more positive labels around the ground truth label.
For example, you can specify gap_span = 3 (3 positive labels for each ground truth label). indices[i]
stores the indexes of the ground truth labels. In the first iteration of the loop, offset
is 0, so the indices are unchanged. In the second, offset
is 1, so -1 is added to each ground truth index. In the third, offset
is 2, so 1 is added to each ground truth index (for higher gap_span
it continues as -2, 2, -3, 3, and so on).
The trick to notice is that in Python -1 // 2 == -1 (not zero).
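The offset arithmetic from the line in question can be checked in isolation; this sketch just reproduces the shift term without the `indices[i]` base:

```python
def shift(offset):
    # The shift applied to each ground-truth index for a given offset:
    # one of the two terms is always zero (depending on offset parity),
    # so the sequence alternates sides around the ground truth.
    return (-(offset % 2) * offset // 2) + ((1 - offset % 2) * offset // 2)

# For offset = 0, 1, 2, 3, 4 the shifts are 0, -1, +1, -2, +2.
print([shift(o) for o in range(5)])  # [0, -1, 1, -2, 2]

# The floor-division trick: Python rounds toward negative infinity,
# so -1 // 2 == -1 and -3 // 2 == -2 (not 0 and -1).
print(-1 // 2, -3 // 2)  # -1 -2
```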
I am getting the following error when trying to train the gap classifier. I am also getting it with your dataset.
Please help.
ValueError Traceback (most recent call last)