NormXU / ERNIE-Layout-Pytorch

An unofficial Pytorch implementation of ERNIE-Layout which is originally released through PaddleNLP.
http://arxiv.org/abs/2210.06155
MIT License

Couldn't get good result with actual image #13

Closed. calvinzhan closed this issue 1 year ago

calvinzhan commented 1 year ago

I tried an actual image sample from the ernie-layout official site with test_ernie_qa.py, but couldn't get a good result.

What I did was:

  1. Used paddleocr to get the text and a bbox for every word
  2. Assembled the text segment bboxes
  3. Replaced the original context, layout, and pil_image
  4. Commented out the start_positions and end_positions assignments

The start_max ends up bigger than end_max, so the answer is empty.
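To illustrate what I mean by an empty answer, here is a minimal, made-up sketch of the usual argmax span decoding (the logit values are invented; this is not the exact code in test_ernie_qa.py):

import torch

# hypothetical QA head outputs for a 4-token sequence
start_logits = torch.tensor([[0.1, 0.3, 2.0, 0.2]])
end_logits = torch.tensor([[1.5, 0.2, 0.1, 0.3]])

start_max = start_logits.argmax(dim=-1).item()  # 2
end_max = end_logits.argmax(dim=-1).item()      # 0

# slicing from start_max to end_max yields no tokens when start_max > end_max,
# so the decoded answer string ends up empty
answer_token_ids = list(range(start_max, end_max + 1))
print(answer_token_ids)  # []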

The code I changed for test_ernie_qa.py is listed below.

pretrain_torch_model_or_path = "Norm/ERNIE-Layout-Pytorch" 
doc_imag_path = "examples/resume.png"

device = torch.device("cuda:0")

def two_dimension_sort_box(box1: Bbox, box2: Bbox, vratio=0.5):
    # Sort by x first, then y; if the two boxes overlap vertically by less than
    # vratio of the smaller box height, sort by y first (reading order).
    kernel = [box1.left - box2.left, box1.top - box2.top]
    if box1.voverlap(box2) < vratio * min(box1.height, box2.height):
        kernel = [box1.top - box2.top, box1.left - box2.left]
    return kernel[0] if kernel[0] != 0 else kernel[1]

def two_dimension_sort_layout(layout1, layout2, vratio=0.54):
    return two_dimension_sort_box(layout1["bbox"], layout2["bbox"], vratio)

def construct_segment_bbox(ocr_res):
    # Each PaddleOCR item is ([top-left, top-right, bottom-right, bottom-left], (text, score));
    # convert the quad into a (left, top, width, height) Bbox per text segment.
    segments = []
    for rst in ocr_res:
        left = min(rst[0][0][0], rst[0][3][0])
        top = min(rst[0][0][-1], rst[0][1][-1])
        width = max(rst[0][1][0], rst[0][2][0]) - min(rst[0][0][0], rst[0][3][0])
        height = max(rst[0][2][-1], rst[0][3][-1]) - min(rst[0][0][-1], rst[0][1][-1])
        segments.append({"bbox": Bbox(*[left, top, width, height]), "text": rst[-1][0]})
    # segments.sort(key=cmp_to_key(two_dimension_sort_layout))  # requires functools.cmp_to_key
    return segments

def main():
    # Set up OCR and run it on the document image
    from paddleocr import PaddleOCR
    ocr_engine = PaddleOCR(use_angle_cls=False, show_log=False, use_gpu=True, lang="ch")
    ocr_result = ocr_engine.ocr(doc_imag_path, cls=False)
    ocr_result = ocr_result[0] if len(ocr_result) == 1 else ocr_result
    segment_result = construct_segment_bbox(ocr_result)

    context = []
    layout = []
    for segment in segment_result:
        context.append(segment["text"])
        bbox = segment["bbox"]
        # convert (left, top, width, height) into an [x0, y0, x1, y1] box
        layout.append([int(bbox.left), int(bbox.top), int(bbox.left + bbox.width), int(bbox.top + bbox.height)])

    pil_image = Image.open(doc_imag_path).convert("RGB")

    # initialize tokenizer
    tokenizer = ErnieLayoutTokenizerFast.from_pretrained(pretrained_model_name_or_path=pretrain_torch_model_or_path)

    # initialize feature extractor
    feature_extractor = ErnieLayoutImageProcessor(apply_ocr=False)
    processor = ERNIELayoutProcessor(image_processor=feature_extractor, tokenizer=tokenizer)

    # Tokenize context & questions
    context_encodings = processor(pil_image, context)
    question = "五百丁本次想要担任的是什么职位?"  # "What position is 五百丁 applying for this time?"

    # tokenized_res['start_positions'] = torch.tensor([6]).to(device)
    # tokenized_res['end_positions'] = torch.tensor([12]).to(device)
   .......

(attached image: resume)

NormXU commented 1 year ago

The models you downloaded from Hugging Face are just pretrained weights. To get satisfactory results on downstream tasks such as QA and token classification, you need to fine-tune the weights on a task-specific dataset.

test_ernie_qa.py is just a showcase of how to use the code on a QA task.

allanj posted a complete fine-tuning pipeline for DocVQA that I strongly recommend reading. You can replace its tokenizer and model with the ERNIE-Layout ones from this repo and train a model for your own task.
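For anyone landing here, a minimal sketch of such a fine-tuning loop, assuming the ErnieLayoutForQuestionAnswering class used in test_ernie_qa.py, a HuggingFace-style output that exposes .loss when gold spans are passed, and a hypothetical train_dataloader that yields processor-encoded batches containing start_positions and end_positions:

import torch
from torch.optim import AdamW

# ErnieLayoutForQuestionAnswering is assumed to be imported the same way test_ernie_qa.py imports it
device = torch.device("cuda:0")
model = ErnieLayoutForQuestionAnswering.from_pretrained("Norm/ERNIE-Layout-Pytorch")
model.to(device)
model.train()
optimizer = AdamW(model.parameters(), lr=2e-5)

for epoch in range(3):
    for batch in train_dataloader:  # hypothetical dataloader of processor outputs plus gold spans
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)    # start_positions / end_positions in the batch yield a training loss
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()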