
DocLayout-YOLO: Enhancing Document Layout Analysis through Diverse Synthetic Data and Global-to-Local Adaptive Perception
https://huggingface.co/spaces/opendatalab/DocLayout-YOLO
GNU Affero General Public License v3.0

DocLayout-YOLO

Official PyTorch implementation of DocLayout-YOLO.

Zhiyuan Zhao, Hengrui Kang, Bin Wang, Conghui He

arXiv Online Demo Hugging Face Spaces

Abstract

Document Layout Analysis is crucial for real-world document understanding systems, but it encounters a challenging trade-off between speed and accuracy: multimodal methods leveraging both text and visual features achieve higher accuracy but suffer from significant latency, whereas unimodal methods relying solely on visual features offer faster processing speeds at the expense of accuracy. To address this dilemma, we introduce DocLayout-YOLO, a novel approach that enhances accuracy while maintaining speed advantages through document-specific optimizations in both pre-training and model design. For robust document pre-training, we introduce the Mesh-candidate BestFit algorithm, which frames document synthesis as a two-dimensional bin packing problem, generating the large-scale, diverse DocSynth-300K dataset. Pre-training on the resulting DocSynth-300K dataset significantly improves fine-tuning performance across various document types. In terms of model optimization, we propose a Global-to-Local Controllable Receptive Module that is capable of better handling multi-scale variations of document elements. Furthermore, to validate performance across different document types, we introduce a complex and challenging benchmark named DocStructBench. Extensive experiments on downstream datasets demonstrate that DocLayout-YOLO excels in both speed and accuracy. 
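The "document synthesis as two-dimensional bin packing" framing can be illustrated with a toy shelf-based best-fit packer. This is a generic sketch for intuition only, not the paper's Mesh-candidate BestFit algorithm; the function name and the `(w, h)` element representation are assumptions made for the example:

```python
def shelf_best_fit(page_w, page_h, elements):
    """Greedily pack (w, h) rectangles onto horizontal shelves,
    choosing for each rectangle the shelf where it fits most tightly."""
    shelves = []      # each shelf: [y_offset, shelf_height, used_width]
    placements = []   # (x, y, w, h) for each placed rectangle
    next_y = 0
    # Placing taller elements first keeps shelves tight.
    for w, h in sorted(elements, key=lambda e: e[1], reverse=True):
        best, best_slack = None, None
        for shelf in shelves:
            if shelf[2] + w <= page_w and h <= shelf[1]:
                slack = (page_w - shelf[2] - w) + (shelf[1] - h)
                if best_slack is None or slack < best_slack:
                    best, best_slack = shelf, slack
        if best is None:
            # No existing shelf fits: open a new one if the page allows.
            if next_y + h > page_h or w > page_w:
                continue  # skip elements that cannot fit on this page
            best = [next_y, h, 0]
            shelves.append(best)
            next_y += h
        placements.append((best[2], best[0], w, h))
        best[2] += w
    return placements
```

The real algorithm additionally optimizes for diversity of layouts across 300K synthetic pages; the sketch above only captures the basic packing step.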


News πŸš€πŸš€πŸš€

2024.10.21 πŸŽ‰πŸŽ‰ Online demo available on πŸ€—Huggingface.

2024.10.18 πŸŽ‰πŸŽ‰ DocLayout-YOLO is implemented in PDF-Extract-Kit for document context extraction.

2024.10.16 πŸŽ‰πŸŽ‰ Paper now available on arXiv.

Quick Start

The Online Demo is now available. For local development, follow the steps below:

1. Environment Setup

Follow these steps to set up your environment (clone the repository first, since `pip install -e .` installs from a local checkout):

conda create -n doclayout_yolo python=3.10
conda activate doclayout_yolo
git clone https://github.com/opendatalab/DocLayout-YOLO.git
cd DocLayout-YOLO
pip install -e .

Note: If you only need the package for inference, you can simply install it via pip:

pip install doclayout-yolo

2. Prediction

You can make predictions using either a script or the SDK:

We provide a model fine-tuned on DocStructBench for prediction, which is capable of handling various document types. The model can be downloaded from here, and example images can be found under assets/example.


Note: For PDF content extraction, please refer to PDF-Extract-Kit and MinerU.

Note: Thanks to Niels, DocLayout-YOLO models can now be loaded directly from πŸ€—Huggingface, as follows:

from huggingface_hub import hf_hub_download
from doclayout_yolo import YOLOv10

filepath = hf_hub_download(repo_id="juliozhao/DocLayout-YOLO-DocStructBench", filename="doclayout_yolo_docstructbench_imgsz1024.pt")
model = YOLOv10(filepath)

or directly load using from_pretrained:

model = YOLOv10.from_pretrained("juliozhao/DocLayout-YOLO-DocStructBench")

More details can be found in this PR.
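Once the model returns one box per layout element, downstream pipelines usually want them sorted into reading order. A minimal sketch (not part of the package; it assumes each detection has been unpacked into a `(label, conf, x1, y1, x2, y2)` tuple) that orders boxes top-to-bottom, then left-to-right within each visual row:

```python
def reading_order(detections, row_tol=10):
    """Sort layout detections into top-to-bottom, left-to-right reading order.
    Each detection is assumed to be a (label, conf, x1, y1, x2, y2) tuple;
    boxes whose top edges are within row_tol pixels are treated as one row."""
    rows = []
    for det in sorted(detections, key=lambda d: d[3]):   # by top edge (y1)
        for row in rows:
            if abs(row[0][3] - det[3]) <= row_tol:        # same visual row
                row.append(det)
                break
        else:
            rows.append([det])
    ordered = []
    for row in rows:
        ordered.extend(sorted(row, key=lambda d: d[2]))   # by left edge (x1)
    return ordered
```

This simple row-grouping heuristic breaks on multi-column layouts; for robust PDF reading order, defer to PDF-Extract-Kit or MinerU as noted above.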

Training and Evaluation on Public DLA Datasets

Data Preparation

  1. Specify the data root path

Find your Ultralytics config file (for Linux users: $HOME/.config/Ultralytics/settings.yaml) and change datasets_dir to the project root path.
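For example, with the repository checked out at /home/user/DocLayout-YOLO (a hypothetical path; substitute your own), the relevant line in settings.yaml would look like:

```yaml
# Ultralytics settings.yaml (excerpt)
datasets_dir: /home/user/DocLayout-YOLO  # project root containing ./layout_data
```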

  2. Download the prepared YOLO-format D4LA and DocLayNet data from the links below and put them under ./layout_data:
| Dataset | Download |
|---|---|
| D4LA | link |
| DocLayNet | link |

The file structure is as follows:

./layout_data
β”œβ”€β”€ D4LA
β”‚   β”œβ”€β”€ images
β”‚   β”œβ”€β”€ labels
β”‚   β”œβ”€β”€ test.txt
β”‚   └── train.txt
└── doclaynet
    β”œβ”€β”€ images
    β”œβ”€β”€ labels
    β”œβ”€β”€ val.txt
    └── train.txt

Training and Evaluation

Training is conducted on 8 GPUs with a global batch size of 64 (8 images per device). The detailed settings and checkpoints are as follows:

| Dataset | Model | DocSynth300K pretrained? | imgsz | Learning rate | Finetune | Evaluation | AP50 | mAP | Checkpoint |
|---|---|---|---|---|---|---|---|---|---|
| D4LA | DocLayout-YOLO | ✗ | 1600 | 0.04 | command | command | 81.7 | 69.8 | checkpoint |
| D4LA | DocLayout-YOLO | ✓ | 1600 | 0.04 | command | command | 82.4 | 70.3 | checkpoint |
| DocLayNet | DocLayout-YOLO | ✗ | 1120 | 0.02 | command | command | 93.0 | 77.7 | checkpoint |
| DocLayNet | DocLayout-YOLO | ✓ | 1120 | 0.02 | command | command | 93.4 | 79.7 | checkpoint |

The DocSynth300K pretrained model can be downloaded from here. During evaluation, change checkpoint.pt to the path of the model being evaluated.

Acknowledgement

The codebase is built on ultralytics and YOLOv10.

Thanks for their great work!

Citation

@misc{zhao2024doclayoutyoloenhancingdocumentlayout,
      title={DocLayout-YOLO: Enhancing Document Layout Analysis through Diverse Synthetic Data and Global-to-Local Adaptive Perception}, 
      author={Zhiyuan Zhao and Hengrui Kang and Bin Wang and Conghui He},
      year={2024},
      eprint={2410.12628},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2410.12628}, 
}

@article{wang2024mineru,
  title={MinerU: An Open-Source Solution for Precise Document Content Extraction},
  author={Wang, Bin and Xu, Chao and Zhao, Xiaomeng and Ouyang, Linke and Wu, Fan and Zhao, Zhiyuan and Xu, Rui and Liu, Kaiwen and Qu, Yuan and Shang, Fukai and others},
  journal={arXiv preprint arXiv:2409.18839},
  year={2024}
}