UniModal4Reasoning / StructEqTable-Deploy

A High-efficiency Open-source Toolkit for the Table-to-LaTeX Task
Apache License 2.0

Is the dataset publicly available? #9

Open JasonKitty opened 2 months ago

JasonKitty commented 2 months ago

I can only find the DocGenome dataset. Is the table recognition model trained on this dataset?

Thank you!

PrinceVictor commented 2 months ago

Yes. Our model is trained on the DocGenome dataset. Specifically, we extracted the table data from DocGenome to fine-tune our model.

Thank you for your interest in our work! Let me know if you have any further questions.

JasonKitty commented 2 months ago


Thank you for your reply! I have two more questions.

  1. The article mentions that table recognition and formula recognition both use the same model architecture as Pix2Struct. Are separate models trained for each task?

  2. For formula recognition, MinerU uses UniMERNet, which adds a length embedding to the decoder. Is a similar improvement applied in table recognition?

PrinceVictor commented 2 months ago

Thank you for your questions.

  1. Yes, separate models are trained for table and formula recognition.
  2. Unlike UniMERNet, there is no length embedding added to the decoder.
JasonKitty commented 2 months ago


Thank you! One more question: the paper mentions the tokenizer from Nougat. Has this been updated? I find that the two tokenizers are not the same.

PrinceVictor commented 2 months ago

We currently utilize the tokenizer from Pix2Struct, but we have expanded the vocabulary to support the Chinese language better.
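
As a rough illustration of what "expanding the vocabulary" means (a hypothetical sketch with made-up tokens and IDs; the real Pix2Struct tokenizer is a SentencePiece model, and with Hugging Face `transformers` one would typically use `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(...)`):

```python
# Hypothetical sketch: appending new (e.g. Chinese) tokens to an existing
# vocabulary without disturbing the IDs of tokens already present.

def expand_vocab(vocab, new_tokens):
    """Append unseen tokens, assigning fresh consecutive IDs."""
    out = dict(vocab)
    next_id = max(out.values(), default=-1) + 1
    for tok in new_tokens:
        if tok not in out:
            out[tok] = next_id
            next_id += 1
    return out

base = {"<pad>": 0, "<eos>": 1, "\\hline": 2, "&": 3}
expanded = expand_vocab(base, ["表", "格", "\\hline"])  # "\\hline" already present, skipped
```

After such an expansion, the decoder's embedding matrix has to be resized to the new vocabulary size so the added rows can be trained.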

JasonKitty commented 2 months ago


Table recognition is a token-intensive task, and I think a dedicated tokenizer could streamline the output representation, improving both inference speed and training performance.
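
A toy illustration of that point (hypothetical tokenization schemes, not the project's actual tokenizer): treating whole LaTeX commands as single vocabulary entries shrinks sequences considerably compared to a near character-level split.

```python
import re

SRC = r"\begin{tabular}{cc} a & b \\ \hline c & d \end{tabular}"

# Worst-case baseline: roughly one token per non-space character,
# as when a generic subword vocabulary fragments LaTeX commands.
naive_tokens = [ch for ch in SRC if not ch.isspace()]

# Dedicated table vocabulary: whole commands, braces, separators,
# and cell contents each become a single token.
dedicated_tokens = re.findall(r"\\[A-Za-z]+|[{}&]|\\\\|[^\s{}&\\]+", SRC)

print(len(naive_tokens), len(dedicated_tokens))  # → 46 19
```

Shorter target sequences mean fewer autoregressive decoding steps, which is where the inference-speed gain would come from.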

PrinceVictor commented 2 months ago

Thank you for your valuable suggestion. We will continue to improve the model for better performance.

JasonKitty commented 2 months ago


Questions regarding the data preparation:

  1. The article mentions that the training data consists of 500k articles. May I ask how many table image-LaTeX pairs were used for StructEqTable training?
  2. Table LaTeX often contains cross-references (e.g., \ref{}, \cite{}, \citep{}) and non-unique expressions (e.g., \textbf{} and \bf{}, or different formatting controls that produce similar visual effects). Does such noise negatively affect the model's learning?
  3. Was any data cleaning performed on the table LaTeX?
  4. How were the table images annotated with the corresponding LaTeX text?
  5. What are the shortcomings of this model?

Looking forward to your reply. Thank you!