Unstructured-IO / unstructured

Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine learning pipelines.
https://www.unstructured.io/
Apache License 2.0

[![License](https://img.shields.io/pypi/l/unstructured.svg)](https://pypi.python.org/pypi/unstructured/) [![Python versions](https://img.shields.io/pypi/pyversions/unstructured.svg)](https://pypi.python.org/pypi/unstructured/) [![Contributors](https://img.shields.io/github/contributors/unstructured-io/unstructured)](https://GitHub.com/unstructured-io/unstructured.js/graphs/contributors) [![Code of Conduct](https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg)](code_of_conduct.md) [![Release](https://img.shields.io/github/release/unstructured-io/unstructured)](https://GitHub.com/unstructured-io/unstructured.js/releases) [![Open Source](https://badgen.net/badge/Open%20Source%20%3F/Yes%21/blue?icon=github)](https://github.com/Naereen/badges/) [![Downloads](https://static.pepy.tech/badge/unstructured)](https://pepy.tech/project/unstructured) [![Downloads per month](https://static.pepy.tech/badge/unstructured/month)](https://pepy.tech/project/unstructured)

Open-Source Pre-Processing Tools for Unstructured Data

The unstructured library provides open-source components for ingesting and pre-processing images and text documents, such as PDFs, HTML, Word docs, and many more. The use cases of unstructured revolve around streamlining and optimizing the data processing workflow for LLMs. unstructured's modular functions and connectors form a cohesive system that simplifies data ingestion and pre-processing, making it adaptable to different platforms and efficient in transforming unstructured data into structured outputs.

Try the Unstructured Serverless API!

Looking for better pre-processing performance and less setup? Check out our new Serverless API! The Unstructured Serverless API is our most performant API yet, delivering a more responsive, production-grade solution to better support your business and LLM needs. Head to our signup page to get started for free.

:eight_pointed_black_star: Quick Start

There are several ways to use the unstructured library:

Run the library in a container

The following instructions are intended to help you get up and running using Docker to interact with unstructured. If you don't already have Docker installed on your machine, see the Docker installation documentation.

NOTE: we build multi-platform images to support both x86_64 and Apple silicon hardware. docker pull should download the corresponding image for your architecture, but you can specify with --platform (e.g. --platform linux/amd64) if needed.

We build Docker images for all pushes to main. We tag each image with the corresponding short commit hash (e.g. fbc7a69) and the application version (e.g. 0.5.5-dev1). We also tag the most recent image with latest. To leverage this, docker pull from our image repository.

docker pull downloads.unstructured.io/unstructured-io/unstructured:latest

Once pulled, you can create a container from this image and shell into it.

# create the container
docker run -dt --name unstructured downloads.unstructured.io/unstructured-io/unstructured:latest

# this will drop you into a bash shell where the Docker image is running
docker exec -it unstructured bash

You can also build your own Docker image. Note that the base image is wolfi-base, which is updated regularly. If you are building the image locally, it is possible that make docker-build could fail due to upstream changes in wolfi-base.

If you only plan on parsing one type of data, you can speed up building the image by commenting out some of the packages/requirements needed for other data types. See the Dockerfile to determine which lines are necessary for your use case.

make docker-build

# this will drop you into a bash shell where the Docker image is running
make docker-start-bash

Once in the running container, you can try things directly in the Python interpreter's interactive mode.

# this will drop you into a python console so you can run the below partition functions
python3

>>> from unstructured.partition.pdf import partition_pdf
>>> elements = partition_pdf(filename="example-docs/layout-parser-paper-fast.pdf")

>>> from unstructured.partition.text import partition_text
>>> elements = partition_text(filename="example-docs/fake-text.txt")
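For a quick look at what came back, you can loop over the returned elements in the same session. The sketch below relies only on the elements being printable objects:

>>> # inspect the first few elements: their concrete type and the start of their text
>>> for el in elements[:5]:
...     print(type(el).__name__, "-", str(el)[:60])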

Installing the library

Use the following instructions to get up and running with unstructured and test your installation. Install the base package from PyPI (for example, pip install unstructured), adding document-specific extras as needed; see the installation guide for details.

At this point, you should be able to run the following code:

from unstructured.partition.auto import partition

elements = partition(filename="example-docs/eml/fake-email.eml")
print("\n\n".join([str(el) for el in elements]))

Installation Instructions for Local Development

The following instructions are intended to help you get up and running with unstructured locally if you are planning to contribute to the project.

Additionally, if you're planning to contribute to unstructured, we provide an optional pre-commit configuration file to ensure your code matches the formatting and linting standards used in unstructured. If you'd prefer not to have code changes auto-tidied before every commit, you can use make check to see whether any linting or formatting changes should be applied, and make tidy to apply them.

If you opt to use pre-commit, you'll just need to install the hooks with pre-commit install, since the pre-commit package is installed as part of make install mentioned above. If you later decide against it, you can uninstall the hooks with pre-commit uninstall.

In addition to developing in your local OS, we also provide a Docker-based development environment:

make docker-start-dev

This starts a docker container with your local repo mounted to /mnt/local_unstructured. This docker image allows you to develop without worrying about your OS's compatibility with the repo and its dependencies.

:clap: Quick Tour

Documentation

For more comprehensive documentation, visit https://docs.unstructured.io. You can also learn more about our other products on the documentation page, including our SaaS API.

The open-source documentation also includes several pages that are helpful for new users to review.

PDF Document Parsing Example

The following examples show how to get started with the unstructured library. The easiest way to parse a document in unstructured is to use the partition function. If you use the partition function, unstructured will detect the file type and route it to the appropriate file-specific partitioning function. Note that partition may require additional dependencies per document type; for example, to install docx dependencies you need to run pip install "unstructured[docx]". See our installation guide for more details.

from unstructured.partition.auto import partition

elements = partition("example-docs/layout-parser-paper.pdf")

Run print("\n\n".join([str(el) for el in elements])) to get a string representation of the output, which looks like:


LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis

Zejiang Shen 1 ( (cid:0) ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and
Weining Li 5

Abstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neural
networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation.
However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy
reuse of important innovations by a wide audience. Though there have been ongoing efforts to improve reusability and
simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none
of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA
is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper
introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applications.
The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models
for layout detection, character recognition, and many other document processing tasks. To promote extensibility,
LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digitization
pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in
real-word use cases. The library is publicly available at https://layout-parser.github.io

Keywords: Document Image Analysis · Deep Learning · Layout Analysis · Character Recognition · Open Source library ·
Toolkit.

Introduction

Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of document image analysis (DIA) tasks
including document image classification [11,

See the partitioning section in our documentation for a full list of options and instructions on how to use file-specific partitioning functions.
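As a further illustration, a file-specific partitioner can also be called directly once its extras are installed. The sketch below assumes you have run pip install "unstructured[docx]" and have a local .docx file; the path shown is only an example:

from unstructured.partition.docx import partition_docx

# Parse a Word document with the docx-specific partitioner instead of auto-detection.
elements = partition_docx(filename="example-docs/fake.docx")
print("\n\n".join(str(el) for el in elements))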

:guardsman: Security Policy

See our security policy for information on how to report security vulnerabilities.

:bug: Reporting Bugs

Encountered a bug? Please create a new GitHub issue and use our bug report template to describe the problem. To help us diagnose the issue, use the python scripts/collect_env.py command to gather your system's environment information and include it in your report. Your assistance helps us continuously improve our software - thank you!

:books: Learn more

| Section | Description |
| --- | --- |
| Company Website | Unstructured.io product and company info |
| Documentation | Full API documentation |
| Batch Processing | Ingesting batches of documents through Unstructured |

:chart_with_upwards_trend: Analytics

We’ve partnered with Scarf (https://scarf.sh) to collect anonymized user statistics to understand which features our community is using and how to prioritize product decision-making in the future. To learn more about how we collect and use this data, please read our Privacy Policy. To opt out of this data collection, you can set the environment variable SCARF_NO_ANALYTICS=true before running any unstructured commands.
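If you prefer to opt out from Python rather than exporting the variable in your shell, a minimal sketch is below; it assumes the environment variable is read later in the same process:

import os

# Equivalent to `export SCARF_NO_ANALYTICS=true` in your shell; set it before
# importing or running unstructured so the setting is visible when analytics runs.
os.environ["SCARF_NO_ANALYTICS"] = "true"

from unstructured.partition.auto import partition
elements = partition(filename="example-docs/eml/fake-email.eml")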