poloclub / unitable

UniTable: Towards a Unified Table Foundation Model
https://arxiv.org/abs/2403.04822
MIT License

dataset Annotation #6

Open tzktok opened 1 month ago

tzktok commented 1 month ago

I want to fine-tune the UniTable model on my custom dataset. How should I do the annotation process? Is there any tool available for your annotation methods? @matthewdhull @polochau @haekyu @helblazer811 @ShengYun-Peng

ShengYun-Peng commented 1 month ago

Hi @tzktok, thanks for your interest! As stated in the paper, we used publicly available datasets while training UniTable. I will share the papers of these datasets below and their annotation processes may be helpful to you!

- PubTabNet: https://github.com/ibm-aur-nlp/PubTabNet
- SynthTabNet: https://arxiv.org/abs/2203.01017
- FinTabNet: https://developer.ibm.com/exchanges/data/all/fintabnet/

whalefa1I commented 1 month ago

> Hi @tzktok, thanks for your interest! As stated in the paper, we used publicly available datasets while training UniTable. (dataset links quoted above)

I have used my own data to fine-tune the model, and the results have been very good. Thank you for your efforts. However, the inference speed does not meet my requirements. Are there any good methods to speed up inference? I have tried using TensorRT, but the improvement was not significant. Should I consider adding a KV cache to reduce the time spent on inference?

ShengYun-Peng commented 1 month ago

Glad to know the fine-tuning went well! Yes, UniTable was implemented with a vanilla transformer architecture. A KV cache like the one in the Llama 3 architecture here will largely speed up inference. Interested in opening a PR?

whalefa1I commented 1 month ago

> Glad to know the fine-tuning went well! Yes, UniTable was implemented with a vanilla transformer architecture. A KV cache like the one in the Llama 3 architecture here will largely speed up inference. Interested in opening a PR?

I will try to add this part, and when it all goes well I will submit the PR~

ShengYun-Peng commented 1 month ago

Thanks! I would recommend starting by implementing the KV-cache logic in the pipeline notebook and comparing the speed.
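For anyone picking this up, here is a minimal sketch of the KV-cache idea for a vanilla transformer decoder, assuming a standard PyTorch multi-head self-attention block (this is not UniTable's actual code; class and variable names are illustrative):

import torch
import torch.nn.functional as F

class CachedSelfAttention(torch.nn.Module):
    """Self-attention that reuses cached keys/values during incremental decoding."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = torch.nn.Linear(d_model, 3 * d_model)
        self.proj = torch.nn.Linear(d_model, d_model)

    def forward(self, x, kv_cache=None):
        # x: (batch, new_tokens, d_model); new_tokens == 1 during incremental decoding
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        split = lambda z: z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        if kv_cache is not None:
            # reuse keys/values computed at earlier decoding steps
            k = torch.cat([kv_cache[0], k], dim=2)
            v = torch.cat([kv_cache[1], v], dim=2)
        # the single new query may attend to everything in the cache;
        # a multi-token prefill (empty cache, t > 1) still needs a causal mask
        out = F.scaled_dot_product_attention(q, k, v, is_causal=(kv_cache is None and t > 1))
        out = out.transpose(1, 2).reshape(b, t, -1)
        return self.proj(out), (k, v)

In the decoding loop you would keep one (k, v) pair per decoder layer and feed only the newest token through the decoder at each step, so each step attends over cached keys/values instead of re-encoding the whole prefix.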

tzktok commented 1 month ago

> I have used my own data to fine-tune the model, and the results have been very good. (quoted from @whalefa1I's comment above)

How did you annotate your own dataset?

pincusz commented 1 month ago

I'm also interested in training on my own dataset but have no idea where to start with annotating it. Any advice? I originally tried using the full_pipeline notebook, but it did not create an accurate table from the image.

lerndeep commented 1 month ago

I also want to train with a custom dataset. Could you please share the custom dataset preparation Python file?

lerndeep commented 1 month ago

@whalefa1I Could you please provide the training script for UniTable large, for the bbox, cell, and content training modules?

Sanster commented 1 month ago

@whalefa1I May I ask how much data you used for training in your scenario?

lerndeep commented 1 month ago

@whalefa1I Could you please share the custom dataset preparation script?

whalefa1I commented 1 month ago

> @whalefa1I May I ask how much data you used for training in your scenario?

Around 30k, maybe? Only the bbox model~

whalefa1I commented 1 month ago

> @whalefa1I Could you please provide the training script for UniTable large, for the bbox, cell, and content training modules?

As long as you find the corresponding option in the CONFIG.mk file and configure it when running the Makefile with the experiment name [EXP_$*], it should work, right? Or do you want to convert it into a regular training script instead of using Hydra for configuration?

whalefa1I commented 1 month ago

> @whalefa1I Could you please share the custom dataset preparation script?

Our data annotation format differs from the open-source TSR annotation format, but both are composed of two coordinate points per cell.

import json
from tqdm import tqdm

final_label_dataset = []
# each data_from_platform is a parsed Labelme JSON annotation
for data_from_platform in tqdm(data_from_platform_list):
    tmp_bbox_label = {}
    tmp_bbox_label['filename'] = data_from_platform["imagePath"]
    tmp_bbox_label['split'] = 'train'
    shapes = data_from_platform["shapes"]
    cells = []
    for sh in shapes:
        label = sh["label"]
        points = sh["points"]
        # keep the top-left and bottom-right corners as [x1, y1, x2, y2]
        points = [int(points[0][0]), int(points[0][1]), int(points[2][0]), int(points[2][1])]
        cells.append({"tokens": label, "bbox": points})
    tmp_bbox_label['cells'] = cells
    final_label_dataset.append(tmp_bbox_label)

# write one JSON record per line (JSONL)
with open('./train_data4unitable.json', 'w') as file:
    for data in final_label_dataset:
        file.write(json.dumps(data) + '\n')
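For context, `data_from_platform_list` above is assumed to be a list of parsed Labelme JSON files; a small sketch of how it might be built (the directory path and helper name are just placeholders):

import glob
import json

def load_labelme_annotations(label_dir):
    """Parse every Labelme JSON file under label_dir into a list of dicts."""
    annotations = []
    for path in sorted(glob.glob(f"{label_dir}/*.json")):
        with open(path, "r", encoding="utf-8") as f:
            annotations.append(json.load(f))
    return annotations

# e.g. data_from_platform_list = load_labelme_annotations("./labelme_labels")
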
lerndeep commented 1 month ago

@whalefa1I

> Our data annotation format differs from the open-source TSR annotation format, but both are composed of two coordinate points per cell. (dataset preparation snippet quoted above)

1. Using this, you train for cell detection and content recognition, right?
2. Did you do pretraining, or only fine-tuning?

In my case, a table has around 1000 cells, so I am not sure whether fine-tuning with only an increased max sequence length will work well.

whalefa1I commented 1 month ago

> Thanks! I would recommend starting by implementing the KV-cache logic in the pipeline notebook and comparing the speed.

It seems that, because the decoder has only 4 layers (or there may be an error in my implementation), the acceleration effect is not significant: only about a 7% speedup, varying with the number of bboxes. Due to the differences between my custom attention implementation and the native torch attention (the MAE between the two attentions is below 1e-8 in the first layer, but increases to 0.9 after the subsequent cross-attention), it may be necessary to retrain the model. Additionally, I have replaced components using the Llama decoder. If you are interested, I can send it to you.
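For reference, a quick way to sanity-check a custom attention implementation against torch's built-in one is to compare their outputs directly, e.g. by mean absolute error as described above (a minimal sketch; the tensor shapes and the naive_attention helper are illustrative, not UniTable's actual code):

import torch
import torch.nn.functional as F

def naive_attention(q, k, v):
    # plain softmax(QK^T / sqrt(d)) V, no masking or dropout
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return scores.softmax(dim=-1) @ v

# illustrative shapes: (batch, heads, seq_len, head_dim)
q, k, v = (torch.randn(2, 8, 64, 32) for _ in range(3))
ref = F.scaled_dot_product_attention(q, k, v)
mae = (naive_attention(q, k, v) - ref).abs().mean()
print(f"MAE between custom and native attention: {mae.item():.2e}")
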

lerndeep commented 1 month ago

> Our data annotation format differs from the open-source TSR annotation format, but both are composed of two coordinate points per cell. (dataset preparation snippet quoted above)

Thank you for sharing. Have you trained the table structure part or not? If yes, how did you label the dataset in HTML format where colspan/rowspan are present?

whalefa1I commented 1 month ago

> 1. Using this, you train for cell detection and content recognition, right?
> 2. Did you do pretraining, or only fine-tuning?
>
> In my case, a table has around 1000 cells, so I am not sure whether fine-tuning with only an increased max sequence length will work well.

This is an interesting issue. I am currently using the Llama decoder to reproduce the model, and its positional encoding might have some capability for length extrapolation. However, for your case, I think it might be difficult: the out-of-distribution (OOD) effect is likely to be significant, and you may need more data to support 4k-token outputs.

whalefa1I commented 1 month ago

> Thank you for sharing. Have you trained the table structure part or not? If yes, how did you label the dataset in HTML format where colspan/rowspan are present?

This is related to our annotation format. We generate HTML tags from bbox annotations using a set of heuristic rules, so the entire process only requires a bbox model.
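whalefa1I's exact rules are not shared here, but as a rough illustration of the kind of heuristic involved, one can group cell bboxes into rows by their vertical centers and sort each row left-to-right before emitting HTML tags (a hedged sketch only; it ignores colspan/rowspan, which a real rule set would have to handle):

def cells_to_html(cells, row_tol=10):
    """Group cell bboxes ([x1, y1, x2, y2]) into rows by y-center,
    sort each row left-to-right, and emit plain <tr>/<td> tags."""
    rows = []
    for cell in sorted(cells, key=lambda c: (c["bbox"][1] + c["bbox"][3]) / 2):
        y_center = (cell["bbox"][1] + cell["bbox"][3]) / 2
        if rows and abs(y_center - rows[-1][0]) <= row_tol:
            rows[-1][1].append(cell)         # same row as the previous cell
        else:
            rows.append((y_center, [cell]))  # start a new row
    html = ["<table>"]
    for _, row in rows:
        tds = "".join(f"<td>{c['tokens']}</td>" for c in sorted(row, key=lambda c: c["bbox"][0]))
        html.append(f"<tr>{tds}</tr>")
    html.append("</table>")
    return "".join(html)
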

lerndeep commented 1 month ago

> This is related to our annotation format. We generate HTML tags from bbox annotations using a set of heuristic rules, so the entire process only requires a bbox model.

Could you please let me know the process or code of the heuristic rules to generate HTML from the Labelme JSON format? It would be really helpful for me.

Sanster commented 1 month ago

> @whalefa1I May I ask how much data you used for training in your scenario?
>
> Around 30k, maybe? Only the bbox model~

Thank you for your reply. I would also like to ask: in your scenario, what are the advantages of using UniTable, which obtains bbox coordinates through autoregressive decoding, compared to using object detection models (such as YOLO)?

BTW, I added a decoder with kv-cache in this PR https://github.com/poloclub/unitable/pull/11, which can achieve about a 30% improvement in inference speed with batch_size=1.


whalefa1I commented 1 month ago

> In your scenario, what are the advantages of using UniTable, which obtains bbox coordinates through autoregressive decoding, compared to using object detection models (such as YOLO)? BTW, I added a decoder with kv-cache in this PR #11, which can achieve about a 30% improvement in inference speed with batch_size=1.

1. Intuitively, direct object detection might not yield good results due to the presence of borderless tables and merged cells, so I have not trained a direct object detection model, though I am currently exploring related projects. This project inspired me to modify the data annotation format, thereby reducing model calls. I have also compared other open-source TSR models and believe that UniTable's pretraining might transfer well to my own dataset.
2. Thank you for your PR on the KV cache. May I ask whether you were able to reproduce the same results as the original weights? I suspect there might be an issue with my implementation, as my outputs and results are inconsistent with yours.

Sanster commented 1 month ago

> Thank you for your PR on the KV cache. May I ask whether you were able to reproduce the same results as the original weights? I suspect there might be an issue with my implementation, as my outputs and results are inconsistent with yours.

I checked the results of the images in the dataset/mini_pubtabnet/val directory through full_pipeline.ipynb, and based on the visualization results, the output is the same as the original model.