Closed naveen-marthala closed 4 years ago
Hi Naveen-kumar-123,
You can follow this tutorial to create your own dataset.
Hey, how can I just upload a single image to make a prediction on it? It would be crazy to make a whole dataset for a single prediction.
You can edit the `images` array in https://github.com/awslabs/handwritten-text-recognition-for-apache-mxnet/blob/master/0_handwriting_ocr.ipynb
@jonomon, I will try that and get back to you very soon. Please do not close this issue before that.
@jonomon As you suggested, I have created the `images` array like:
images = [cv2.imread('/location of image/image_1.jpg'), cv2.imread('/location of image/image_2.jpg')]
This worked. Thanks
You're welcome
Good work, thanks. I am using the pre-trained models to get text from images. While going through the code, I learned that the format of my test images has to match what the `IAMDataset` class in `ocr.utils.iam_dataset` outputs. So, how do I modify the `IAMDataset` class in `ocr.utils.iam_dataset` so that an input image matches the test dataframe format this class outputs, or how do I get a dataframe for images other than the ones in the IAM dataset? I couldn't fully understand this class, so if anyone has worked on this, please help me.
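For anyone facing the same question: the exact target shape, dtype, and scaling must be read from the `IAMDataset` source itself, but a generic preprocessing step for a custom image usually amounts to resizing and normalising. The sketch below is hypothetical (the function name, target dimensions, and [0, 1] scaling are my assumptions, not taken from `ocr.utils.iam_dataset`); it uses plain NumPy nearest-neighbour resizing so it has no dependency beyond what the repo already needs:

```python
import numpy as np

def preprocess(img, target_h=64, target_w=256):
    """Hypothetical sketch: resize a grayscale uint8 image to a fixed
    (target_h, target_w) shape via nearest-neighbour indexing, then
    scale pixel values to [0, 1] as float32. Replace the target shape
    and scaling with whatever IAMDataset actually produces.
    """
    h, w = img.shape[:2]
    # Map each output row/column back to a source row/column index.
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0
```

Comparing the output of this against one sample emitted by `IAMDataset` (shape, dtype, value range) is the quickest way to confirm or correct the assumed format.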