Open tzktok opened 6 months ago
Hi @tzktok, thanks for your interest! As stated in the paper, we used publicly available datasets while training UniTable. I will share the papers of these datasets below and their annotation processes may be helpful to you!
- PubTabNet: https://github.com/ibm-aur-nlp/PubTabNet
- SynthTabNet: https://arxiv.org/abs/2203.01017
- FinTabNet: https://developer.ibm.com/exchanges/data/all/fintabnet/
I have used my own data to fine-tune the model, and the results have been very good. Thank you for your efforts. However, the inference speed does not meet my requirements. Are there any good methods to speed up inference? I have tried using TensorRT, but the improvement was not significant. Should I consider adding a KV cache to reduce the time spent on inference?
Glad to know the finetuning went well! Yes, UniTable was implemented with a vanilla transformer architecture. A KV cache like the one in the Llama 3 architecture will largely speed up inference. Interested in opening a PR?
I will try to add this part, and if all goes well I will submit the PR~
Thanks! I would recommend starting by implementing the KV-cache logic in the pipeline notebook and comparing the speed.
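For anyone picking this up, the sketch below shows what per-layer KV caching in an autoregressive decoder can look like. It is not UniTable's actual decoder code: the class name `CachedSelfAttention` and the shapes are illustrative, and a full integration would also cache the cross-attention keys/values computed once from the encoder output.

```python
import torch
import torch.nn.functional as F
from torch import nn

class CachedSelfAttention(nn.Module):
    """Self-attention layer that reuses keys/values from previous decode steps."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.k_cache = None  # (B, H, T_past, head_dim)
        self.v_cache = None

    def reset_cache(self):
        self.k_cache = self.v_cache = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T_new, d_model); T_new == 1 during incremental decoding
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.reshape(B, T, self.n_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        prefill = self.k_cache is None
        if not prefill:
            k = torch.cat([self.k_cache, k], dim=2)  # prepend cached keys
            v = torch.cat([self.v_cache, v], dim=2)  # prepend cached values
        self.k_cache, self.v_cache = k, v
        # Prefill needs a causal mask; single-token steps do not, because the
        # one new query may attend to every cached position.
        y = F.scaled_dot_product_attention(q, k, v, is_causal=prefill and T > 1)
        y = y.transpose(1, 2).reshape(B, T, -1)
        return self.out(y)
```

During generation the layer is called once per new token, and the cache has to be reset between input images.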
How did you annotate your own dataset?
I'm also interested in training using my own dataset but have no idea where to start for annotating it. Any advice? I originally tried using the full_pipeline notebook but it did not create an accurate table from the image.
I also want to train with a custom dataset. Could you please share the custom dataset preparation Python file?
@whalefa1I Could you please provide the training script for UniTable-large for the bbox, cell, and content training modules?
@whalefa1I May I ask how much data you used for training in your scenario?
@whalefa1I Could you please share the custom dataset preparation script?
> @whalefa1I May I ask how much data you used for training in your scenario?

30k maybe? Only the bbox model~
> @whalefa1I Could you please provide the training script for UniTable-large for the bbox, cell, and content training modules?

Maybe as long as you find the corresponding option in the CONFIG.mk file and configure it when running the Makefile with the exp name [EXP_$*], it should work, right? Do you want to convert it into a regular training script instead of using Hydra for configuration?
> @whalefa1I Could you please share the custom dataset preparation script?

Our data annotation format differs from the open-source TSR task annotation method, but in both cases each cell box is defined by two coordinate points.
```python
import json

from tqdm import tqdm

# data_from_platform_list: the parsed Labelme JSON annotations, one dict per image
final_label_dataset = []
for data_from_platform in tqdm(data_from_platform_list):
    tmp_bbox_label = {}
    tmp_bbox_label['filename'] = data_from_platform["imagePath"]
    tmp_bbox_label['split'] = 'train'
    shapes = data_from_platform["shapes"]
    cells = []
    for sh in shapes:
        label = sh["label"]
        points = sh["points"]
        # points[0] and points[2] are opposite corners of the cell polygon
        points = [int(points[0][0]), int(points[0][1]),
                  int(points[2][0]), int(points[2][1])]
        cells.append({"tokens": label, "bbox": points})
    tmp_bbox_label['cells'] = cells
    final_label_dataset.append(tmp_bbox_label)

# write one JSON record per line (JSONL)
with open(r'./train_data4unitable.json', 'w') as file:
    for data in final_label_dataset:
        file.write(json.dumps(data) + '\n')
```
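To sanity-check the output, one record of the resulting JSONL can be read back; the field names follow the script above (whether this matches exactly what the UniTable data loader expects should be verified against the repo's dataset code):

```python
# Illustration only: read back the first record to inspect the JSONL format.
import json

with open("./train_data4unitable.json") as f:
    first = json.loads(f.readline())
print(first["filename"], first["split"], first["cells"][0]["bbox"])
```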
@whalefa1I
1) Using this, you train for cell detection and content recognition, right? 2) Did you do pretraining or only fine-tuning?
In my case tables have around 1000 cells, so I don't know whether fine-tuning with only an increased max sequence length will work well or not.
> Thanks! I would recommend starting by implementing the KV-cache logic in the pipeline notebook and comparing the speed.

It seems that, because the decoder has only 4 layers (or because there is an error in my implementation), the acceleration effect is not significant: only about a 7% speedup, varying with the number of bboxes. Due to the differences between my custom attention implementation and the native torch attention (the MAE between the two attention outputs is below 1e-8 in the first layer, but increases to 0.9 after the subsequent cross-attention), it may be necessary to retrain the model. Additionally, I have replaced components using the llama decoder. If you are interested, I can send it to you.
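As a side note on that comparison, a quick parity check between a hand-rolled attention and PyTorch's fused implementation looks like the sketch below. It is purely illustrative (random tensors, no masking, not UniTable's modules), but the same pattern can be applied layer by layer to find where the outputs start to diverge:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, H, T, D = 1, 8, 64, 32
q, k, v = (torch.randn(B, H, T, D) for _ in range(3))

# hand-rolled scaled dot-product attention
scores = q @ k.transpose(-2, -1) / D**0.5
manual = torch.softmax(scores, dim=-1) @ v

# native fused attention
native = F.scaled_dot_product_attention(q, k, v)

print("max abs diff:", (manual - native).abs().max().item())
print("mean abs diff:", (manual - native).abs().mean().item())
```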
Thank you for sharing. Have you trained the table structure part or not? If yes, how did you label the dataset in HTML format where colspan/rowspan are present?
> 1) Using this, you train for cell detection and content recognition, right? 2) Did you do pretraining or only fine-tuning? In my case tables have around 1000 cells, so I don't know whether fine-tuning with only an increased max sequence length will work well or not.

This is an interesting issue. I am currently using the llama decoder to reproduce the model, and its positional encoding (rotary embeddings) might have some capability for length extension. However, for your case, I think it might be difficult: the out-of-distribution (OOD) phenomenon is likely to be significant, and you may need more data to support 4k-token outputs.
> Thank you for sharing. Have you trained the table structure part or not? If yes, how did you label the dataset in HTML format where colspan/rowspan are present?

This is related to our annotation format. We generate HTML tags from bbox annotations using a set of heuristic rules, so the entire process only requires a bbox model.
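For readers looking for a starting point, the sketch below shows one very simple bbox-to-HTML heuristic: cluster cell boxes into rows by the vertical position of their centers, then sort each row left to right. This is not the commenter's actual rule set, and it ignores colspan/rowspan entirely; handling merged cells requires comparing column boundaries across rows, which is where real heuristics get more involved.

```python
def bboxes_to_html(cells, row_tol=10):
    """cells: list of dicts like {"tokens": "text", "bbox": [x1, y1, x2, y2]}."""
    rows = []
    # group cells into rows: a cell joins the previous row if its vertical
    # center is within row_tol pixels of that row's reference center
    for cell in sorted(cells, key=lambda c: (c["bbox"][1] + c["bbox"][3]) / 2):
        y_center = (cell["bbox"][1] + cell["bbox"][3]) / 2
        if rows and abs(y_center - rows[-1]["y"]) <= row_tol:
            rows[-1]["cells"].append(cell)
        else:
            rows.append({"y": y_center, "cells": [cell]})
    html = ["<table>"]
    for row in rows:
        tds = "".join(
            f"<td>{c['tokens']}</td>"
            for c in sorted(row["cells"], key=lambda c: c["bbox"][0])
        )
        html.append(f"<tr>{tds}</tr>")
    html.append("</table>")
    return "\n".join(html)
```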
Could you please share the process or code of the heuristic rules for generating HTML from the Labelme JSON format?
It would be really helpful for me.
> @whalefa1I May I ask how much data you used for training in your scenario?
> 30k maybe? Only the bbox model~

Thank you for your reply. I would also like to ask: in your scenario, what are the advantages of using UniTable, which obtains bbox coordinates through an autoregressive method, compared to object detection models such as YOLO?
BTW, I added a decoder with kv-cache in this PR https://github.com/poloclub/unitable/pull/11, which can achieve about a 30% improvement in inference speed with batch_size=1.
- Intuitively, direct object detection might not yield good results due to the presence of borderless tables or merged cells. Therefore, I have not trained a direct object detection model, but I am currently exploring related projects. This project has inspired me to modify the data annotation format, thereby reducing model calls. I have also compared other open-source TSR models and believe that the pretrained UniTable weights might transfer well to my own dataset.
- Thank you for your PR on the KV cache. May I ask whether you are able to achieve the same results as the original weights? I suspect there might be an issue with my implementation, as I have obtained outputs and results that are inconsistent with yours.
I checked the results of the images in the dataset/mini_pubtabnet/val directory through full_pipeline.ipynb, and based on the visualization results, the output is the same as the original model.
Hey @whalefa1I I'm wondering if you can assist.
I have a dataset that consists of PDFs with matching XML in SVG tag format, derived from D3.js.
I have bbox and tokens for all the text, but since the images have to be resized, how do I ensure that the existing annotations will correspond with the downsampled images when fine-tuning?
Is the SVG tag structure useful? Would I need to add the SVG tags to the existing HTML vocab file?
Also, some tables overflow into different pages. When converting pdf2image, how can I maintain consistency of box locations for each image to source PDF?
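The resizing question is not answered directly in the thread. In general, bbox annotations are scaled by the same factors used to resize the image; a minimal sketch of that arithmetic (nothing UniTable-specific, and unnecessary if the dataloader already rescales boxes for you):

```python
def rescale_bbox(bbox, orig_size, new_size):
    """bbox = [x1, y1, x2, y2] in original-image pixels."""
    ow, oh = orig_size          # (width, height) before resizing
    nw, nh = new_size           # (width, height) after resizing
    sx, sy = nw / ow, nh / oh
    x1, y1, x2, y2 = bbox
    return [round(x1 * sx), round(y1 * sy), round(x2 * sx), round(y2 * sy)]

# example: page rendered at 1654x2339 px, model input resized to 448x448
print(rescale_bbox([100, 50, 300, 120], orig_size=(1654, 2339), new_size=(448, 448)))
```

The same scale factors apply to crops from multi-page renders, as long as each crop's own original size is used.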
Hey @whalefa1I
> - Could you please share some samples of your dataset so that I can see if they can be converted into the data format for my training?

Sure. I've shared a sample PDF with matching XML doc (SVG tag).
> - Since I have not fine-tuned the HTML model and content model, I don't know if this will help, but a few months ago I tried adding a "border=1" tag for bordered/borderless tables to the HTML tags. This requires adding the tag to the vocab.json file, and it works, so if you want the HTML model to generate related tokens, you can consider adding SVG tags to the vocab file.

Thanks, will try this out.
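On the vocab point above: the exact structure of the vocab file in the repo should be checked before editing, but if it is (or can be treated as) a mapping from token strings to integer ids, extending it looks roughly like the sketch below. The helper name and the token strings are illustrative, and any embedding/output layers sized to the vocabulary must be resized to match after new tokens are added.

```python
import json

def extend_vocab(path: str, new_tokens: list[str]) -> None:
    """Append new tokens to a vocab file assumed to map token -> integer id."""
    with open(path) as f:
        vocab = json.load(f)          # assumed format: {"<td>": 0, "<tr>": 1, ...}
    next_id = max(vocab.values()) + 1
    for tok in new_tokens:
        if tok not in vocab:
            vocab[tok] = next_id
            next_id += 1
    with open(path, "w") as f:
        json.dump(vocab, f, ensure_ascii=False, indent=2)

# e.g. adding a border attribute token and a couple of SVG tags
extend_vocab("vocab.json", ['border="1"', "<svg>", "</svg>"])
```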
Hi, have you trained the bbox model with your own dataset? Can you share the specific steps?
I want to fine-tune the UniTable model on my custom dataset. How should I do the annotation? Is there any tool available for your annotation methods? @matthewdhull @polochau @haekyu @helblazer811 @ShengYun-Peng