zzzDavid / ICDAR-2019-SROIE

ICDAR 2019 Robust Reading Challenge on Scanned Receipts OCR and Information Extraction
MIT License

Task 3 Data Information #14

Open lyrab96 opened 4 years ago

lyrab96 commented 4 years ago

Hello, can you please provide some information on how the dict and key .pth files were created? I am trying to use the model on my own data but am failing to do so (I already have the other box, img & key files).

aureliensimon commented 4 years ago

Hi, the dict is structured as follows: it is a dictionary where, for each key, it contains two pieces of information:

If you want to create your own training file :

The create_data function is located in task3/src/my_data.py line 139

Karim-Baig commented 4 years ago

Could you please help me by providing an example of what a "text" would be, along with an image? Is it the output of hOCR, or just text saved in a word document?

aureliensimon commented 4 years ago

The text can be extracted from the already existing text files (provided by the SROIE challenge, which contain both the text and its position). Since Task 3 doesn't need the position information, the function sort_text (file my_data.py, line 109) takes the text file as input and returns only the text, with \n separating the lines.
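A minimal sketch of what that extraction step amounts to, assuming the SROIE box-file format (eight comma-separated corner coordinates followed by the transcript on each line; this is a re-implementation for illustration, not the repo's actual sort_text):

```python
def sort_text(box_file_content: str) -> str:
    """Drop the 8 coordinates on each line, keep only the transcript."""
    lines = []
    for raw in box_file_content.splitlines():
        raw = raw.strip()
        if not raw:
            continue
        # Split on the first 8 commas only, so commas inside the
        # transcript itself are preserved.
        lines.append(raw.split(",", 8)[-1])
    return "\n".join(lines)

sample = (
    "72,25,326,25,326,64,72,64,TAN WOON YANN\n"
    "50,82,440,82,440,121,50,121,BOOK TA .K(TAMAN DAYA) SDN BHD"
)
print(sort_text(sample))
# TAN WOON YANN
# BOOK TA .K(TAMAN DAYA) SDN BHD
```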

If you don't want to use the provided text files and would rather use your own images, you can use an OCR engine such as Amazon Textract, Tesseract, or many others to extract the text from the image (be sure to convert the extracted text to uppercase), and provide this text (basically a string containing \n) to the create_data function (file my_data.py, line 139), replacing txt_files with the text you extracted.

This also means you are not forced to save the text you extracted from an image in a text file; just tweak the create_data function to take as input a path to a folder containing all the images you want to build a dataset from, and create an array mapping each image to its corresponding extracted text.
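That tweak could be sketched like this, assuming the extracted text already exists as one .txt file per image (the function name and the file-per-image layout are illustrative, not the repo's API; swap the file read for a real OCR call if you start from raw images):

```python
import os


def build_text_index(folder: str) -> dict:
    """Map each image's base name to its OCR text, uppercased.

    Illustrative sketch: assumes a pre-extracted <name>.txt sits next
    to each image in `folder`.
    """
    index = {}
    for entry in os.listdir(folder):
        if not entry.lower().endswith(".txt"):
            continue
        name = os.path.splitext(entry)[0]
        with open(os.path.join(folder, entry), encoding="utf-8") as fh:
            # Uppercase, as recommended above, so the character set
            # matches what the model was trained on.
            index[name] = fh.read().upper()
    return index
```

The resulting dict can then be fed to create_data in place of the txt_files argument.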

NISH1001 commented 4 years ago

@Karim-Baig I think we don't need images for Task 3. Task 3 uses an RNN for character-level classification. I guess that's also one reason why the approach doesn't work for more complex documents. related issue

aureliensimon commented 4 years ago

Yes, since Task 3 does not require images but only text, you can do the text extraction as a standalone program instead of doing it while creating the training file.

Karim-Baig commented 4 years ago

@NISH1001 had mentioned passing both the text and positions as embeddings to a CNN in order to better localize the key-value pairs in documents. Is there any implementation of what you mentioned? I saw LayoutLM, but the pipeline for creating and predicting on a custom dataset was not clear! CharGrid: I was not able to find any open implementation of it.

NISH1001 commented 4 years ago

@Karim-Baig one possibility is to use some character-level embedding. In this case (Task 3), it's already doing that with an RNN. However, I presume we could pre-train those layers in some way on a lot of text before doing the downstream (classification) task. That might help it generalize, I guess.

On another project (not related to this thread), I am using fasttext-based embeddings with a Graph Neural Network for classification. The results are far better than the RNN-based architecture. There I generated the embeddings in an unsupervised manner over all the documents, and eventually used those embeddings for classification (along with some custom features for each word/token).

NISH1001 commented 4 years ago

> @NISH1001 had mentioned about passing both the text and positions as embedding to a CNN in order to better localize the key-value pairs in documents . Is there any implementation of the same of what you mentioned? I saw LayoutLM, but there the pipeline for creating and predicting on custom dataset was not clear! CharGrid : I was not able to find any open implementation of the same.

About CharGrid: I haven't tried it either. One variation I have tried is feeding fasttext embeddings to a UNet, directly concatenating a 64-dimensional vector to the input image. This bumps the input channels up to 67 (64 + 3). However, training is very inefficient because of the high dimensionality. I saw one paper doing the same with 32 + 3 input channels and tried that; training was still very inefficient (in both memory and time).

anitchakraborty commented 3 years ago

I have placed the test files into tmp/task3-test(347p) for images and tmp/text.task1&2-test(361p) for text files containing coordinates and extracted text, exactly in the format of the ICDAR dataset.

When I run my_data.py to create test_dict.pth, it runs fine and shows no error. However, when I run test.py using the trained model.pth file, it shows:

Traceback (most recent call last):
  File "test.py", line 44, in <module>
    test()
  File "test.py", line 27, in test
    oupt = model(text_tensor)
  File "C:\Users\1632613\AppData\Local\conda\conda\envs\gpoc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\1632613\Pictures\task3\my_models\__init__.py", line 13, in forward
    embedded = self.embed(inpt)
  File "C:\Users\1632613\AppData\Local\conda\conda\envs\gpoc\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\1632613\AppData\Local\conda\conda\envs\gpoc\lib\site-packages\torch\nn\modules\sparse.py", line 158, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "C:\Users\1632613\AppData\Local\conda\conda\envs\gpoc\lib\site-packages\torch\nn\functional.py", line 1916, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self

Any idea? @aureliensimon @patrick22414 @Karim-Baig @lyrab96
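For what it's worth, torch.nn.Embedding raises "IndexError: index out of range in self" when an input index is >= num_embeddings, so this traceback usually means the test text contains a character that was not in the vocabulary the model was trained with. A defensive character encoder (the names and charset below are illustrative, not the repo's) maps unknown characters to a reserved id instead of producing an out-of-range index:

```python
# Example charset; the real one must match whatever the model was
# trained with.
VOCAB = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ .,:/()&-"
# Reserve id 0 for padding/unknown characters.
CHAR_TO_ID = {ch: i + 1 for i, ch in enumerate(VOCAB)}


def encode(text: str) -> list:
    """Map characters to ids, falling back to 0 for unknown characters
    so the embedding layer never sees an out-of-range index."""
    return [CHAR_TO_ID.get(ch, 0) for ch in text.upper()]
```

Checking that every id produced from your test set is below the embedding table size (num_embeddings) is a quick way to confirm this is the cause.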

4ff4n commented 2 years ago

@anitchakraborty, I am facing the same issue. How did you manage to get past it? Please help!