gptpdf


Uses a visual large language model (such as GPT-4o) to parse PDFs into Markdown.

Our approach is very simple (only 293 lines of code), yet it parses typography, math formulas, tables, pictures, charts, and more almost perfectly.

Average cost per page: $0.013

This package uses the GeneralAgent library to interact with the OpenAI API.

pdfgpt-ui is a visual tool based on gptpdf.

Process steps

  1. Use the PyMuPDF library to parse the PDF, find all non-text areas, and mark them with rectangles on a rendering of each page (a minimal sketch follows this list).

  2. Use a large visual model (such as GPT-4o) to parse the marked page images and produce a Markdown file.
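
A minimal sketch of step 1 with PyMuPDF (the fitz module): frame embedded image areas on each page and render the marked pages for the visual model. This is an illustration of the idea, not gptpdf's exact implementation; the file name example.pdf and the 150 dpi rendering are assumptions.

import fitz  # PyMuPDF

doc = fitz.open('example.pdf')
for i, page in enumerate(doc):
    # Treat embedded images as the non-text areas and frame each one in red.
    for info in page.get_image_info():
        page.draw_rect(fitz.Rect(info['bbox']), color=(1, 0, 0), width=1)
    # Render the marked page; images like these are what the visual model receives.
    page.get_pixmap(dpi=150).save(f'page_{i}.png')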

DEMO

  1. examples/attention_is_all_you_need/output.md: parsed output of examples/attention_is_all_you_need.pdf.

  2. examples/rh/output.md: parsed output of examples/rh.pdf.

Installation

pip install gptpdf

Usage

Local Usage

from gptpdf import parse_pdf

api_key = 'Your OpenAI API Key'
pdf_path = 'path/to/your.pdf'  # PDF file to parse
content, image_paths = parse_pdf(pdf_path, api_key=api_key)
print(content)

See test/test.py for more examples.
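
If your provider exposes an OpenAI-compatible endpoint (for example a proxy or a self-hosted gateway), it can be selected with the base_url and model parameters. A hedged sketch; the endpoint URL below is a placeholder, not a real value:

content, image_paths = parse_pdf(
    pdf_path,
    api_key=api_key,
    base_url='https://your-openai-compatible-endpoint/v1',  # placeholder URL
    model='gpt-4o',
)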

Google Colab

See examples/gptpdf_Quick_Tour.ipynb.

API

parse_pdf

Function:

def parse_pdf(
        pdf_path: str,
        output_dir: str = './',
        prompt: Optional[Dict] = None,
        api_key: Optional[str] = None,
        base_url: Optional[str] = None,
        model: str = 'gpt-4o',
        verbose: bool = False,
        gpt_worker: int = 1
) -> Tuple[str, List[str]]:

Parses a PDF file into a Markdown file and returns the Markdown content along with all image paths.

Parameters:

prompt = {
    "prompt": "Custom prompt text",
    "rect_prompt": "Custom rect prompt",
    "role_prompt": "Custom role prompt"
}

content, image_paths = parse_pdf(
    pdf_path=pdf_path,
    output_dir='./output',
    model="gpt-4o",
    prompt=prompt,
    verbose=False,
)

args: LLM other parameters, such as temperature, top_p, max_tokens, presence_penalty, frequency_penalty, etc.
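
These extra parameters are forwarded along with the request to the model. A hedged sketch, assuming they can be supplied directly to parse_pdf as keyword arguments:

content, image_paths = parse_pdf(
    pdf_path=pdf_path,
    api_key=api_key,
    model='gpt-4o',
    temperature=0.0,   # more deterministic transcriptions
    max_tokens=4096,   # cap the Markdown generated per page
)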

Join Us 👏🏻

Scan the project's WeChat QR code to join our group chat or contribute.